C++ Cast double to char and replace into std::array - c++

For RAM optimization purposes, I need to store my data as a std::array<char, N>, where N is the "templated size_t" size of the array.
I need to manage a huge amount of "Line" objects that contain any kind of data (numerical and char).
So my Line class is:
template<size_t NByte>
class Line {
public:
    Line (std::array<char, NByte> data, std::vector<size_t> offset) :
        _data(data), _offset(offset) {}

    template<typename T>
    T getValue (const size_t& index) const {
        return *reinterpret_cast<const T*>(_data.data() + _offset[index]);
    }

    template<typename T>
    void setValue (const size_t& index, const T value) const {
        char * new_value = const_cast<char *>(reinterpret_cast<const char *>(&value));
        std::move(new_value, new_value + sizeof(T), const_cast<char *>(_data.data() + _offset[index]));
    }

private:
    std::array<char, NByte> _data;
    std::vector<size_t> _offset;
};
My questions are:
Is there a better way to do the setter and getter function?
Is this robust against memory leak?
Is there any problem to use this code in production/release?
Edit: The question behind those is: is there any other way to work with binary data in memory and provide a "human understandable" interface for the final user through setters and getters?

Is there any problem in using this code in production/release?
Yes, this code is platform-dependent.
The data will be stored differently in Big-Endian platforms and in Little-Endian platforms.
If you're counting on two systems communicating with each other (transmitting and receiving this data), then you will have to make sure that both sides use platforms of the same endianness.
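Beyond endianness, the reinterpret_cast in getValue and the const_cast in setValue also rely on alignment and strict-aliasing assumptions that the standard does not guarantee. A minimal sketch of an alternative accessor pair using memcpy (my suggestion, not part of the original answer; it requires <cstring> and <type_traits>, and assumes T is trivially copyable and the offsets are in range):

template<typename T>
T getValue (const size_t& index) const {
    static_assert(std::is_trivially_copyable<T>::value, "T must be trivially copyable");
    T value;
    std::memcpy(&value, _data.data() + _offset[index], sizeof(T)); // byte-wise copy: no aliasing or alignment issues
    return value;
}

template<typename T>
void setValue (const size_t& index, const T& value) { // non-const: it really does modify _data
    std::memcpy(_data.data() + _offset[index], &value, sizeof(T));
}

Compilers typically lower these fixed-size memcpy calls to plain loads and stores, so there is usually no runtime cost, and the setter no longer needs const_cast.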

Related

How to pack all data members' arrays of bytes into a single vector?

I have created a function serialize which takes the Data (a class containing 4 members:
int32, int64, float, double) as input and returns an encoded vector of bytes of all elements, which I will further pass to a deserialize function to get the original data back.
std::vector<uint8_t> serialize(Data &D)
{
    std::vector<uint8_t> serialized_data;
    std::vector<uint8_t> intwo = encode(D.Int32); // output [32 13 24 0]
    std::vector<uint8_t> insf = encode(D.Int64);  // output [233 244 55 134 255 23 55]
    // float
    float ft = D.Float;                 // float value, e.g. 4.55, encoded in binary format
    char result[sizeof(float)];
    memcpy(result, &ft, sizeof(ft));
    // double
    double dt = D.Double;               // double value, e.g. 4.55, encoded in binary format
    char resultdouble[sizeof(double)];
    memcpy(resultdouble, &dt, sizeof(dt));
    /////
    ///// How to bind everything here
    /////
    return serialized_data;
}
Data deserialize(std::vector<uint8_t> &Bytes) // vector returned from the function above
{
    Data D2;
    D2.Int64 = decode(Bytes, D2);
    // D2.Int32 = decode(Bytes, D2);
    // D2.float = decode(Bytes, D2);
    // D2.double = decode(Bytes, D2);
    /// Return original data (all class members)
    return D2;
}
I don't have any idea of how to move forward.
Q1. If I bind everything in a single vector, how would I dissect them while deserializing? Should there be some kind of delimiter?
Q2. Is there any better way of doing it?
If I bind everything in a single vector, how would I dissect them while deserializing? Should there be some kind of delimiter?
In a stream, you either know what type comes next - or you'll have to have some sort of type indicator in the stream. "Here comes a vector of int with size ..." etc.:
vector int size elem1 elem2 ... elemX
Depending on how many types you need to support, the type information could be 1 or more bytes. If the smallest "unknown" entities are your classes, then you need one indicator per class you aim to support.
If you know exactly what should be in the stream, the type information for vector and int could be left out:
size elem1 elem2 ... elemX
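As a concrete illustration (my own sketch, not from the original answer), a length-prefixed encoding of a std::vector<int32_t> under the "size elem1 ... elemX" layout could look like this; the bytes are written out explicitly in little-endian order so both ends agree regardless of host endianness:

#include <cstdint>
#include <vector>

// Hypothetical helpers: append a 32-bit value, and a vector of them
// prefixed by its element count, as raw little-endian bytes.
void put_u32(std::vector<uint8_t>& out, uint32_t v) {
    out.push_back(static_cast<uint8_t>(v));
    out.push_back(static_cast<uint8_t>(v >> 8));
    out.push_back(static_cast<uint8_t>(v >> 16));
    out.push_back(static_cast<uint8_t>(v >> 24));
}

void put_vector(std::vector<uint8_t>& out, const std::vector<int32_t>& xs) {
    put_u32(out, static_cast<uint32_t>(xs.size()));              // size
    for (int32_t x : xs) put_u32(out, static_cast<uint32_t>(x)); // elem1 ... elemX
}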
Q2. Is there any better way of doing it?
One simplification could be to make serialize more generic so you could reuse it. If you have some
std::vector<uint8_t> encode(const T& x)
overloads for the fundamental types (and perhaps container types) you'd like to support, you could make it something like this:
template <class... Ts>
std::vector<uint8_t> serialize(Ts&&... ts) {
    std::vector<uint8_t> serialized_data;
    [](auto& data, auto&&... vs) {
        (data.insert(data.end(), vs.begin(), vs.end()), ...);
    }(serialized_data, encode(ts)...);
    return serialized_data;
}
You could then write serialization for a class simply by calling serialize with all the member variables, and you could make serialization of composite types pretty easy:
struct Foo {
    int32_t x;                  // encode(int32_t) needed
    std::string y;              // encode(const string&) needed
    std::vector<std::string> z; // encode(const vector<T>&) + encode(const string&)
};

std::vector<uint8_t> encode(const Foo& f) {
    return serialize(f.x, f.y, f.z);
}

struct Bar {
    Foo f;         // encode(const Foo&) needed
    std::string s; // encode(const string&) needed
};

std::vector<uint8_t> encode(const Bar& b) {
    return serialize(b.f, b.s);
}
The above makes encoding of classes pretty straightforward. To stream the result, you could add an adapter which simply references the object to serialize, encodes it, and writes the encoded data to an ostream:
struct BarSerializer {
    Bar& b;

    friend std::ostream& operator<<(std::ostream& os, const BarSerializer& bs) {
        auto s = encode(bs.b); // encode(const Bar&) needed
        return os.write(reinterpret_cast<const char*>(s.data()), s.size());
    }
};
You'd make the deserialize function template and decode overloads in a similar manner.
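For completeness, here is a hedged sketch (my own, not part of the original answer) of what one encode/decode pair for trivially copyable types could look like; it copies raw native-endian bytes, so both ends must share endianness, and the decode convention (an explicit read position) differs from the one in the question:

#include <cstdint>
#include <cstring>
#include <type_traits>
#include <vector>

// Hypothetical catch-all for fundamental types: raw native-endian bytes.
template <class T,
          class = std::enable_if_t<std::is_trivially_copyable<T>::value>>
std::vector<uint8_t> encode(const T& x) {
    std::vector<uint8_t> out(sizeof(T));
    std::memcpy(out.data(), &x, sizeof(T));
    return out;
}

// Matching decode: reads sizeof(T) bytes starting at 'pos' and advances it.
template <class T>
T decode(const std::vector<uint8_t>& bytes, std::size_t& pos) {
    T x;
    std::memcpy(&x, bytes.data() + pos, sizeof(T));
    pos += sizeof(T);
    return x;
}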
There is a high-throughput solution to this, but it comes with a number of caveats about where it works.
You have a compiler and architecture that supports packed alignment. GCC, Clang, ICC, and MSVC all can, but efficiency depends on your architecture. Good news, probably: i386 / x86_64 famously pays little to no penalty for unaligned memory access. SIMD won't work, though.
You have to be using POD members in your struct - std::vector, std::string, maps, sets, deques, lists, and smart pointers won't work here, but a bundle of ints and floats will be fine. It's possible to work around this with custom reimplementations of those other structures, but let's keep this one simple. You can embed other structs and so on, as long as they are also POD. (POD == Plain Old Data, https://en.cppreference.com/w/cpp/language/classes#POD_class)
Your data is coming over the wire in the same endianness for sender and receiver. (This is also workable around with custom data types that implement e.g. operator int32_t(), selected by #define or consteval on endianness.)
Your communication channel is sending a single struct repeatedly, or can rely on a common header (for multiple struct types) to do dispatch in a switch.
Your code then becomes:
#pragma pack(push,1)
struct Data
{
    int32_t Int32;
    int64_t Int64;
    float   Float;
    double  Double;
};
#pragma pack(pop)

const char * serialize(const Data& d)
{
    return reinterpret_cast<const char *>(&d);
}

const Data& deserialize(const char * buffer)
{
    return *reinterpret_cast<const Data*>(buffer);
}
The amount of data you need? Always sizeof(Data). So serialize will always give a const char * pointer to sizeof(Data) bytes, and you need to have read sizeof(Data) bytes to pass to deserialize.
Now, sure, you can pop all of this in and out of a std::vector<uint8_t>. But the neat thing here is that there are no memory copies required at all. You can literally use the object itself for serialization, and use the raw char * data from whatever medium you deserialize from, without any copies or expensive field-by-field operations.
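As a usage sketch (my addition; the buffer is hypothetical and assumed to hold exactly sizeof(Data) bytes from a peer with the same endianness and packing; requires <cstring>):

Data d{42, 1234567890123LL, 4.55f, 4.55};
const char* wire = serialize(d);          // points directly at d, sizeof(Data) bytes long

char buffer[sizeof(Data)];
std::memcpy(buffer, wire, sizeof(Data));  // stand-in for the actual send/receive

const Data& d2 = deserialize(buffer);     // reinterprets the received bytes in place, no copy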
Oh, and edited to add: things like Google protobufs or Cap'n Proto can probably help in the general case of the problems you might be looking to solve.

C++ variable length arrays in struct

I am writing a program for creating, sending, receiving and interpreting ARP packets. I have a structure representing the ARP header like this:
struct ArpHeader
{
unsigned short hardwareType;
unsigned short protocolType;
unsigned char hardwareAddressLength;
unsigned char protocolAddressLength;
unsigned short operationCode;
unsigned char senderHardwareAddress[6];
unsigned char senderProtocolAddress[4];
unsigned char targetHardwareAddress[6];
unsigned char targetProtocolAddress[4];
};
This only works for hardware addresses with length 6 and protocol addresses with length 4. The address lengths are given in the header as well, so to be correct the structure would have to look something like this:
struct ArpHeader
{
unsigned short hardwareType;
unsigned short protocolType;
unsigned char hardwareAddressLength;
unsigned char protocolAddressLength;
unsigned short operationCode;
unsigned char senderHardwareAddress[hardwareAddressLength];
unsigned char senderProtocolAddress[protocolAddressLength];
unsigned char targetHardwareAddress[hardwareAddressLength];
unsigned char targetProtocolAddress[protocolAddressLength];
};
This obviously won't work since the address lengths are not known at compile time. Template structures aren't an option either since I would like to fill in values for the structure and then just cast it from (ArpHeader*) to (char*) in order to get a byte array which can be sent on the network or cast a received byte array from (char*) to (ArpHeader*) in order to interpret it.
One solution would be to create a class with all header fields as member variables, a function to create a byte array representing the ARP header which can be sent on the network and a constructor which would take only a byte array (received on the network) and interpret it by reading all header fields and writing them to the member variables. This is not a nice solution though since it would require a LOT more code.
By contrast, a similar structure for a UDP header, for example, is simple since all header fields are of known constant size. I use
#pragma pack(push, 1)
#pragma pack(pop)
around the structure declaration so that I can actually do a simple C-style cast to get a byte array to be sent on the network.
Is there any solution I could use here which would be close to a structure or at least not require a lot more code than a structure?
I know the last field in a structure (if it is an array) does not need a specific compile-time size; can I use something similar for my problem? Just leaving the sizes of those 4 arrays empty will compile, but I have no idea how that would actually function. Logically speaking, it cannot work, since the compiler would have no idea where the second array starts if the size of the first array is unknown.
You want a fairly low-level thing, an ARP packet, and you are trying to find a way to define a data structure properly so you can cast the blob into that structure. Instead, you can use an interface over the blob.
struct ArpHeader {
    mutable std::vector<uint8_t> buf_;

    template <typename T>
    struct ref {
        uint8_t * const p_;
        ref (uint8_t *p) : p_(p) {}
        operator T () const { T t; memcpy(&t, p_, sizeof(t)); return t; }
        T operator = (T t) const { memcpy(p_, &t, sizeof(t)); return t; }
    };

    template <typename T>
    ref<T> get (size_t offset) const {
        if (offset + sizeof(T) > buf_.size()) throw SOMETHING;
        return ref<T>(&buf_[0] + offset);
    }

    ref<uint16_t> hwType () const { return get<uint16_t>(0); }
    ref<uint16_t> protType () const { return get<uint16_t>(2); }
    ref<uint8_t> hwAddrLen () const { return get<uint8_t>(4); }
    ref<uint8_t> protAddrLen () const { return get<uint8_t>(5); }
    ref<uint16_t> opCode () const { return get<uint16_t>(6); }
    uint8_t *senderHwAddr () const { return &buf_[0] + 8; }
    uint8_t *senderProtAddr () const { return senderHwAddr() + hwAddrLen(); }
    uint8_t *targetHwAddr () const { return senderProtAddr() + protAddrLen(); }
    uint8_t *targetProtAddr () const { return targetHwAddr() + hwAddrLen(); }
};
If you need const correctness, remove mutable, create a const_ref, duplicate the accessors into non-const versions, and make the const versions return const_ref and const uint8_t *.
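A brief usage sketch of this interface (my addition; it assumes buf_ has already been resized to hold the whole packet, and that byte-order conversion is handled separately):

ArpHeader h;
h.buf_.resize(8 + 2 * (6 + 4));   // fixed fields + two (MAC, IPv4) address pairs

h.hwAddrLen() = 6;                // writes through ref<uint8_t>::operator=
h.protAddrLen() = 4;
h.opCode() = 1;                   // note: stored in host byte order; convert for the wire

uint8_t mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};
memcpy(h.senderHwAddr(), mac, 6);

uint16_t op = h.opCode();         // reads back through ref<uint16_t>::operator T()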
Short answer: you just cannot have variable-sized types in C++.
Every type in C++ must have a known (and stable) size during compilation, i.e. sizeof must give a consistent answer. Note that you can have types that hold a variable amount of data (e.g. std::vector<int>) by using the heap, yet the size of the actual object is always constant.
So you can never produce a type declaration that you would cast and get the fields magically adjusted. This goes deep into the fundamental object layout - every member (aka field) must have a known (and stable) offset.
Usually, this issue is solved by writing (or generating) member functions that parse the input data and initialize the object's data. This is basically the age-old data serialization problem, which has been solved countless times in the last 30 or so years.
Here is a mockup of a basic solution:
class packet {
public:
    // simple things
    uint16_t hardware_type() const;

    // variable-sized things
    size_t sender_address_len() const;
    bool copy_sender_address_out(char *dest, size_t dest_size) const;

    // initialization
    bool parse_in(const char *src, size_t len);

private:
    uint16_t hardware_type_;
    std::vector<char> sender_address_;
};
Notes:
the code above shows the very basic structure that would let you do the following:
packet p;
if (!p.parse_in(input, sz))
    return false;
the modern way of doing the same thing via RAII would look like this:
if (!packet::validate(input, sz))
    return false;
packet p = packet::parse_in(input, sz); // static function:
                                        // returns an instance or throws
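For illustration, a hedged sketch of what parse_in might do for the ARP layout above (my own; the field offsets follow the ARP header and the lengths are validated before copying):

bool packet::parse_in(const char *src, size_t len) {
    if (len < 8) return false;                            // fixed ARP prefix is 8 bytes

    memcpy(&hardware_type_, src, sizeof hardware_type_);  // still in network byte order

    unsigned char hw_len   = static_cast<unsigned char>(src[4]);
    unsigned char prot_len = static_cast<unsigned char>(src[5]);
    if (len < 8 + 2u * (hw_len + prot_len)) return false; // both address pairs present?

    sender_address_.assign(src + 8, src + 8 + hw_len);
    return true;
}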
If you want to keep access to the data simple and the data itself public, there is a way to achieve what you want without changing the way you access data. First, you can use std::string instead of the char arrays to store the addresses:
#include <string>
using namespace std; // using this to shorten notation. Preferably put 'std::'
// everywhere you need it instead.
struct ArpHeader
{
unsigned char hardwareAddressLength;
unsigned char protocolAddressLength;
string senderHardwareAddress;
string senderProtocolAddress;
string targetHardwareAddress;
string targetProtocolAddress;
};
Then, you can overload the conversion operator operator const char*() and the constructor ArpHeader(const char*) (and preferably operator=(const char*) too), in order to keep your current sending/receiving functions working, if that's what you need.
A simplified conversion operator (I skipped some fields to make it less complicated, but you should have no problem adding them back) would look like this:
operator const char*(){
    char* myRepresentation;
    unsigned char mySize
        = 2 + senderHardwareAddress.length()
        + senderProtocolAddress.length()
        + targetHardwareAddress.length()
        + targetProtocolAddress.length();
    // We need to store the size, since it varies
    myRepresentation = new char[mySize+1];
    myRepresentation[0] = mySize;
    myRepresentation[1] = hardwareAddressLength;
    myRepresentation[2] = protocolAddressLength;
    unsigned int offset = 3; // just to shorten notation
    memcpy(myRepresentation+offset, senderHardwareAddress.c_str(), senderHardwareAddress.size());
    offset += senderHardwareAddress.size();
    memcpy(myRepresentation+offset, senderProtocolAddress.c_str(), senderProtocolAddress.size());
    offset += senderProtocolAddress.size();
    memcpy(myRepresentation+offset, targetHardwareAddress.c_str(), targetHardwareAddress.size());
    offset += targetHardwareAddress.size();
    memcpy(myRepresentation+offset, targetProtocolAddress.c_str(), targetProtocolAddress.size());
    return myRepresentation;
}
The assignment operator and constructor can be defined like this:
ArpHeader& operator=(const char* buffer){
    hardwareAddressLength = buffer[1];
    protocolAddressLength = buffer[2];
    unsigned int offset = 3; // just to shorten notation
    senderHardwareAddress = string(buffer+offset, hardwareAddressLength);
    offset += hardwareAddressLength;
    senderProtocolAddress = string(buffer+offset, protocolAddressLength);
    offset += protocolAddressLength;
    targetHardwareAddress = string(buffer+offset, hardwareAddressLength);
    offset += hardwareAddressLength;
    targetProtocolAddress = string(buffer+offset, protocolAddressLength);
    return *this;
}

ArpHeader(const char* buffer){
    *this = buffer; // Re-using the operator=
}
Then using your class is as simple as:
ArpHeader h1, h2;
h1.hardwareAddressLength = 3;
h1.protocolAddressLength = 10;
h1.senderHardwareAddress = "foo";
h1.senderProtocolAddress = "something1";
h1.targetHardwareAddress = "bar";
h1.targetProtocolAddress = "something2";
cout << h1.senderHardwareAddress << ", " << h1.senderProtocolAddress
<< " => " << h1.targetHardwareAddress << ", " << h1.targetProtocolAddress << endl;
const char* gottaSendThisSomewhere = h1;
h2 = gottaSendThisSomewhere;
cout << h2.senderHardwareAddress << ", " << h2.senderProtocolAddress
<< " => " << h2.targetHardwareAddress << ", " << h2.targetProtocolAddress << endl;
delete[] gottaSendThisSomewhere;
Which should offer you the utility needed, and keep your code working without changing anything out of the class.
Note however that if you're willing to change the rest of the code a bit (talking here about the code you've already written, outside of the class), jxh's answer should work as fast as this and is more elegant on the inner side.

Calculating a hash from a 6-byte field?

I'm looking for an efficient way to hash a 6-byte field so that it can be used for std::unordered_map.
I think this would be the conventional way of creating a hash:
struct Hash {
    std::size_t operator()(const std::array<uint8_t, 6> & mac) const {
        std::size_t key = 0;
        boost::hash_combine(key, mac[0]);
        boost::hash_combine(key, mac[1]);
        boost::hash_combine(key, mac[2]);
        boost::hash_combine(key, mac[3]);
        boost::hash_combine(key, mac[4]);
        boost::hash_combine(key, mac[5]);
        return key;
    }
};
However I noticed that I can make it a little faster (~20%) using this trick:
struct Hash {
    std::size_t operator()(const std::array<uint8_t, 6> & mac) const {
        std::size_t key = 0;
        // Possibly UB?
        boost::hash_combine(key, reinterpret_cast<const uint32_t&>(mac[0]));
        boost::hash_combine(key, reinterpret_cast<const uint16_t&>(mac[4]));
        return key;
    }
};
And this was even faster:
struct Hash {
    std::size_t operator()(const std::array<uint8_t, 6> & mac) const {
        // Requires size_t to be 64-bit.
        static_assert(sizeof(std::size_t) >= 6, "MAC address doesn't fit in std::size_t!");
        std::size_t key = 0;
        // Likely UB?
        boost::hash_combine(key, 0x0000FFFFFFFFFFFF & reinterpret_cast<const uint64_t&>(mac[0]));
        return key;
    }
};
My question is two-fold:
Are these optimizations going to result in UB?
Is my first solution the way to go? Or is there a better way?
Your optimizations are breaking the strict aliasing rules, which leads (standardly speaking) to undefined behavior.
The last optimization worries me the most, since you are essentially reading memory you ought not to, which may provoke traps if that memory happens to be protected.
Any reason you are not using boost::hash_range?
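For reference, the boost::hash_range version is a one-liner (a sketch of that suggestion; it feeds each byte through Boost's range hasher):

#include <boost/functional/hash.hpp>
#include <array>
#include <cstdint>

struct Hash {
    std::size_t operator()(const std::array<uint8_t, 6>& mac) const {
        return boost::hash_range(mac.begin(), mac.end());
    }
};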
Since boost::hash_range turns out not to be as fast as required, I would propose another solution, based on aliasing. Or rather, two solutions in one.
The first idea is that aliasing can be subdued by using char* as a temporary type.
size_t key = 0;
char* k = reinterpret_cast<char*>(&key);
std::copy(mac.begin(), mac.end(), k);
return key;
is therefore a valid implementation of the hash.
However, we can go one step further. Because of alignment and padding, storing a char[6] and a char[8] is likely to use the same amount of memory within a map node. Therefore, we could enrich the type by using a union:
union MacType {
    unsigned char value[8];
    size_t hash;
};
Now, you can encapsulate this properly within a class (and make sure you always initialize the bytes 7 and 8 to 0), and implement the interface of std::array<unsigned char, 6> that you actually need.
I've used a similar trick for tiny strings (below 8 characters) for hashing and fast (non-alphabetic) comparisons and it's really sweet.

Simplest way to read binary data from a std::vector<unsigned char>?

I have a lump of binary data in the form of const std::vector<unsigned char>, and want to be able to extract individual fields from that, such as 4 bytes for an integer, 1 for a boolean, etc. This needs to be, as far as possible, both efficient and simple. E.g. it should be able to read the data in place without needing to copy it (e.g. into a string or array). And it should be able to read one field at a time, like a parser, since the lump of data does not have a fixed format. I already know how to determine what type of field to read in each case - the problem is getting a usable interface on top of an std::vector for doing this.
However, I can't find a simple way to get this data into an easily usable form that gives me useful read functionality. E.g. std::basic_istringstream<unsigned char> gives me a reading interface, but it seems I need to copy the data into a temporary std::basic_string<unsigned char> first, which is not ideal for bigger blocks of data.
Maybe there is some way I can use a streambuf in this situation to read the data in place, but it would appear that I'd need to derive my own streambuf class to do that.
It occurs to me that I can probably just use sscanf on the vector's data(), and that would seem to be both more succinct and more efficient than the C++ standard library alternatives. EDIT: Having been reminded that sscanf doesn't do what I wrongly thought it did, I actually don't know a clean way to do this in C or C++. But am I missing something, and if so, what?
You have access to the data in a vector through its operator[]. A vector's data is guaranteed to be stored in a single contiguous array, and [] returns a reference to a member of that array. You may use that reference directly, or through memcpy.
std::vector<unsigned char> v;
...
byteField = v[12];
memcpy(&intField, &v[13], sizeof intField);
memcpy(charArray, &v[20], lengthOfCharArray);
EDIT 1:
If you want something "more convenient" than that, you could try:
template <class T>
void ReadFromVector(T& t, std::size_t offset,
                    const std::vector<unsigned char>& v) {
    memcpy(&t, &v[offset], sizeof(T));
}
Usage would be:
std::vector<unsigned char> v;
...
char c;
int i;
uint64_t ull;
ReadFromVector(c, 17, v);
ReadFromVector(i, 99, v);
ReadFromVector(ull, 43, v);
EDIT 2:
struct Reader {
    const std::vector<unsigned char>& v;
    std::size_t offset;

    Reader(const std::vector<unsigned char>& v) : v(v), offset() {}

    template <class T>
    Reader& operator>>(T& t) {
        memcpy(&t, &v[offset], sizeof t);
        offset += sizeof t;
        return *this;
    }

    void operator+=(int i) { offset += i; }

    const char *getStringPointer() const {
        return reinterpret_cast<const char*>(&v[offset]);
    }
};
Usage:
std::vector<unsigned char> v;
Reader r(v);
int i; uint64_t ull;
r >> i >> ull;
const char *companyName = r.getStringPointer();
r += strlen(companyName);
If your vector stores binary data, you can't use sscanf or similar, they work on text.
Converting a byte to a bool is simple enough:
bool b = my_vec[10];
For extracting an unsigned int that's stored in big endian order (assuming your ints are 32 bits):
unsigned int i = my_vec[10] << 24 | my_vec[11] << 16 | my_vec[12] << 8 | my_vec[13];
A 16 bit unsigned short would be similar:
unsigned short s = my_vec[10] << 8 | my_vec[11];
If you can afford the Qt dependency, QByteArray has the fromRawData() named constructor, which wraps existing data buffers in a QByteArray without copying the data. With that byte array, you can then feed a QTextStream.
I'm not aware of any such function in the standard streams library (short of implementing your own streambuf, of course), but I'd love to be proved wrong :)
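For what it's worth, such a streambuf is only a few lines. Here is a minimal read-only sketch (my own; the class name is made up) that points the get area at the existing bytes with setg, so nothing is copied:

#include <istream>
#include <streambuf>
#include <vector>

// Wraps an existing byte buffer as a std::istream source.
// The buffer must outlive the stream; no data is copied.
struct MemBuf : std::streambuf {
    MemBuf(const unsigned char* data, std::size_t size) {
        char* p = const_cast<char*>(reinterpret_cast<const char*>(data));
        setg(p, p, p + size);   // begin, current position, end of the get area
    }
};

// Usage sketch:
// const std::vector<unsigned char>& v = ...;
// MemBuf buf(v.data(), v.size());
// std::istream in(&buf);
// int i;
// in.read(reinterpret_cast<char*>(&i), sizeof i);   // reads 4 bytes in place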
You can use a struct that describes the data you are trying to extract. You can move data from your vector into the struct like this:
struct MyData {
    int intVal;
    bool boolVal;
    char stringVal[15];
} __attribute__((__packed__));

// assuming all extracted types are prefixed with a one byte indicator.
// Also assumes "vec" is your populated vector
int pos = 0;
while (pos < vec.size()-1) {
    switch (vec[pos++]) {
        case 0: { // handle int
            int intValue;
            memcpy(&intValue, &vec[pos], sizeof(int));
            pos += sizeof(int);
            // do something with handled value
            break;
        }
        case 1: { // handle double
            double doubleValue;
            memcpy(&doubleValue, &vec[pos], sizeof(double));
            pos += sizeof(double);
            // do something with handled value
            break;
        }
        case 2: { // handle MyData
            struct MyData data;
            memcpy(&data, &vec[pos], sizeof(struct MyData));
            pos += sizeof(struct MyData);
            // do something with handled value
            break;
        }
        default: {
            // ERROR: unknown type indicator
            break;
        }
    }
}
Use a for loop to iterate over the vector and use bitwise operators to access each bit group. For example, to access the upper four bits of the first unsigned char in your vector:
int myInt = vec[0] & 0xF0;
To read the fourth bit from the right, right after the nibble we just read:
bool myBool = vec[0] & 0x08;
The three least significant (lowest) bits can be accessed like so:
int myInt2 = vec[0] & 0x07;
You can then repeat this process (using a for loop) for every element in your vector.

memcmp sort

I have a single buffer, and several pointers into it. I want to sort the pointers based upon the bytes in the buffer they point at.
qsort() and std::sort() can be given custom comparison functions. For example, if the buffer was zero-terminated I could use strcmp:
int my_strcmp(const void* a, const void* b) {
    const char* const one = *(const char**)a;
    const char* const two = *(const char**)b;
    return ::strcmp(one, two);
}
However, if the buffer is not zero-terminated, I have to use memcmp(), which requires a length parameter.
Is there a tidy, efficient way to get the length of the buffer into my comparison function without a global variable?
With std::sort, you can use a Functor like this:
struct CompString {
    CompString(int len) : m_Len(len) {}

    bool operator()(const char *a, const char *b) const {
        return std::memcmp(a, b, m_Len) < 0;
    }

private:
    int m_Len;
};
Then you can do this:
std::sort(begin(), end(), CompString(4)); // all strings are 4 chars long
EDIT: from the comment suggestions (I guess both strings are in a common buffer?):
struct CompString {
    CompString (const unsigned char* e) : end(e) {}

    bool operator()(const unsigned char *a, const unsigned char *b) const {
        return std::memcmp(a, b, std::min(end - a, end - b)) < 0;
    }

private:
    const unsigned char* const end;
};
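A usage sketch for this buffer-bounded version (my addition; the buffer and pointer names are hypothetical):

std::vector<unsigned char> buffer;        // the single shared buffer
std::vector<const unsigned char*> ptrs;   // pointers into 'buffer'
// ... fill 'buffer' and 'ptrs' ...

std::sort(ptrs.begin(), ptrs.end(),
          CompString(buffer.data() + buffer.size()));   // compare up to the end of the buffer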
With the C function qsort(), no, there is no way to pass the length to your comparison function without using a global variable, which means it can't be done in a thread-safe manner. Some systems have a qsort_r() function (r stands for reentrant) which allows you to pass an extra context parameter, which then gets passed on to your comparison function:
int my_comparison_func(void *context, const void *a, const void *b)
{
    return memcmp(*(const void **)a, *(const void **)b, (size_t)context);
}

qsort_r(data, n, sizeof(void*), (void*)number_of_bytes_to_compare, &my_comparison_func);
Is there a reason you can't null-terminate your buffers?
If not, since you're using C++ you can write your own function object:
struct MyStrCmp {
    MyStrCmp (int n): length(n) { }

    inline bool operator() (const char *lhs, const char *rhs) const {
        return ::strncmp (lhs, rhs, length) < 0;
    }

    int length;
};
// ...
std::sort (myList.begin (), myList.end (), MyStrCmp (STR_LENGTH));
Can you pack your buffer pointer + length into a structure and pass a pointer of that structure as void *?
You could use a hack like:
int buffcmp(const void *b1, const void *b2)
{
    static int bsize = -1;
    if (b2 == NULL) { bsize = *(int*)(b1); return 0; }
    return memcmp(b1, b2, bsize);
}
which you would first call as buffcmp(&bsize, NULL) and then pass it as the comparison function to qsort.
You could of course make the comparison behave more naturally in the case of buffcmp(NULL, NULL) etc by adding more if statements.
You could use functors (give the length to the functor's constructor) or Boost.Lambda (use the length in-place).
I'm not clear on what you're asking. But I'll try, assuming that
You have a single buffer
You have an array of pointers of some kind which has been processed in some way so that some or all of its contents point into the buffer
That is code equivalent to:
char *buf = (char*)malloc(sizeof(char)*bufsize);
for (int i=0; i<bufsize; ++i){
    buf[i] = some_cleverly_chosen_value(i);
}

char *ary[arraysize] = {0};
for(int i=0; i<arraysize; ++i){
    ary[i] = buf + some_clever_function(i);
}
/* ...do the sort here */
/* ...do the sort here */
Now if you control the allocation of the buffer, you could substitute
char *buf = (char*)malloc(sizeof(char)*(bufsize+1));
buf[bufsize]='\0';
and go ahead using strcmp. This may be possible even if you don't control the filling of the buffer.
If you have to live with a buffer handed you by someone else you can
Use some global storage (which you asked to avoid - good thinking).
Hand the sort function something more complicated than a raw pointer (the address of a struct or class that carries the extra data). For this you need to control the definition of ary in the above code.
Use a sort function which supports an extra input - either qsort_r as suggested by Adam, or a home-rolled solution (which I do recommend as an exercise for the student, and don't recommend in real life). In either case the extra data is probably a pointer to the end of the buffer.
memcmp should stop on the first byte that is unequal, so the length should be large, i.e. to-the-end-of-the-buffer. Then the only way it can return zero is if it does go to the end of the buffer.
(BTW, I lean toward merge sort myself. It's stable and well-behaved.)