recasting an unsigned char array to an unsigned long array - c++

Ok, I'm using a raw SHA1 hash to seed a Mersenne Twister pseudo-random number generator.
The generator gives me the option to seed either with an unsigned long or with an array of unsigned longs.
The SHA1 class I'm using gives me the hash as a 20-byte array of unsigned chars.
I figured I could recast this array of chars to an array of longs to get a working seed, but how can I know how long the resulting array of longs is?
example code:
CSHA1 sha1;
sha1.Update((unsigned char*)key, size_key);
sha1.Final();
unsigned char hash[20]; // SHA-1 digests are 20 bytes; GetHash fills this buffer
sha1.GetHash(hash);
// Seed the random with the key
MTRand mt((unsigned long*)hash, <size of array of longs>);
I'm hoping that there is no data loss (as in no bytes are dropped) as I need this to remain cryptographically secure.

You can say
sizeof(unsigned long) / sizeof(unsigned char)
to get the number of bytes in a long.
However there are two potential problems with simply casting.
First, the array of chars might not be properly aligned. On some processors this can cause a trap. On others it just slows execution.
Second, you're asking for byte-order problems if the program must work the same way on different architectures.
You can solve both problems by copying the bytes into an array of longs explicitly. Untested code:
const int bytes_per_long = sizeof(unsigned long) / sizeof(unsigned char);
const int key_length_in_bytes = 20; // SHA-1 digest length
unsigned long hash_copy[key_length_in_bytes / bytes_per_long];
int i_hash = 0;
for (int i_copy = 0; i_copy < int(sizeof hash_copy / sizeof hash_copy[0]); i_copy++) {
    unsigned long b = 0;
    for (int i_byte = 0; i_byte < bytes_per_long; i_byte++)
        b = (b << 8) | hash[i_hash++]; // big-endian pack: same result on every host
    hash_copy[i_copy] = b;
}
// Now use hash_copy.
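For the original question, this slots straight into the seeding call using the MTRand array constructor shown above. Note that with 4-byte longs the 20-byte digest fills exactly five longs and nothing is dropped; with 8-byte longs the division truncates and 4 bytes would be lost:
// Seed with the whole digest; assumes 32-bit unsigned long.
MTRand mt(hash_copy, sizeof hash_copy / sizeof hash_copy[0]);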

You can use len_of_chars * sizeof(char) / sizeof(long), where len_of_chars is presumably 20.

Your library seems to assume 32-bit unsigned longs, so there's no [more] harm in you doing the same. In fact, I'd go as far as to assume 8-bit unsigned chars and perhaps even unpadded, little-endian representations for both. So you could use a simple cast (though I'd use a reinterpret_cast), or maybe @Gene's copying sample for alignment.
Portable code*, however, should use <cstdint>, the uint#_t types therein and piecewise, by-value copying for conversion:
uint32_t littleEndianInt8sToInt32(uint8_t bytes[4]) {
    // cast before the widest shift so a set high bit can't overflow a signed int
    return bytes[0] | (bytes[1] << 8) | (bytes[2] << 16) | (uint32_t(bytes[3]) << 24);
}
...and better names. Sorry, it's getting late here :)
*: Though, of course, stdint itself isn't very portable (>= C++11) and the exact-width types aren't guaranteed to be in it. Ironic.
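A hedged usage sketch for the helper above (assuming unsigned char and uint8_t are the same type, as they are on virtually every platform), turning the 20-byte digest into five 32-bit seed words:
#include <cstdint>
uint32_t seeds[5]; // 20-byte digest = five 32-bit words
for (int i = 0; i < 5; ++i)
    seeds[i] = littleEndianInt8sToInt32(hash + 4 * i);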

Related

XTEA-function with std::vector

I'm trying to encrypt a std::vector with XTEA. Because using std::vector brings various benefits when dealing with big amounts of data, I want to use it.
The XTEA algorithm encrypts 64 bits of data at a time, held in two unsigned longs (v0 and v1).
xtea_enc(unsigned char buf[], int length, unsigned char key[], unsigned char** outbuf)
/* Source http://pastebin.com/uEvZqmUj */
unsigned long v0 = *((unsigned long*)(buf+n));
unsigned long v1 = *((unsigned long*)(buf+n+4));
My problem is that I'm looking for the best way to convert my char vector into an unsigned long pointer.
Or is there another way to split the vector into 64-bit parts for the encryption function?
The insight comes in realizing that each char is a byte; thus a 64-bit number consists of 8 bytes, or two 32-bit numbers.
Thus one 32-bit number can store 4 bytes, so for each 8-byte block in your char vector you would store a pair of 4-byte numbers in a pair of 32-bit numbers. You would then pass this pair in to your xtea function, something like:
uint32_t datablock[2];
datablock[0] = (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | (buf[3]);
datablock[1] = (buf[4] << 24) | (buf[5] << 16) | (buf[6] << 8) | (buf[7]);
where in this example, buf is the type char[8] (or more appropriately uint8_t[8]).
The bit-shift '<<' operator shifts a given byte's bits into the position where they should be stored in the uint32_t (so, for example, the first byte in the above example is stored in the most significant 8 bits of datablock[0]). The '|' operator provides a concatenation of all the bits so that you end up with the full 32-bit number. Hope that makes sense.
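A minimal sketch of that packing applied over a whole vector (assuming the length is a multiple of 8; xtea_encipher_block is a hypothetical stand-in for your XTEA function, not a real API):
#include <cstddef>
#include <cstdint>
#include <vector>

void encrypt_all(std::vector<uint8_t>& buf /*, key... */)
{
    for (std::size_t n = 0; n + 8 <= buf.size(); n += 8) {
        uint32_t v[2];
        v[0] = (uint32_t(buf[n])   << 24) | (buf[n+1] << 16) | (buf[n+2] << 8) | buf[n+3];
        v[1] = (uint32_t(buf[n+4]) << 24) | (buf[n+5] << 16) | (buf[n+6] << 8) | buf[n+7];
        // xtea_encipher_block(v, key); // hypothetical: encrypts the 64-bit block in place
        for (int i = 0; i < 4; ++i) {   // unpack both words back into the vector
            buf[n+i]   = (v[0] >> (24 - 8*i)) & 0xFF;
            buf[n+4+i] = (v[1] >> (24 - 8*i)) & 0xFF;
        }
    }
}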
My problem is that I'm looking for the best way to convert my char vector into an unsigned long pointer.
((unsigned long*)vec.data()) since C++11, or ((unsigned long*)&vec[0]) pre-C++11?
PS: I guess someone will come along and argue that it should be a reinterpret_cast<unsigned long*>() or something sooner or later, and they'll probably be right.
Also, I used a std::string, but here's how I did the encipher loop:
string message = readMessage();
for (size_t i = 0; i < message.length(); i += 8)
{
encipher(32, (uint32_t *)&message[i], keys);
}
// now message is encrypted
and
for (size_t i = 0; i < message.length(); i += 8)
{
decipher(32, (uint32_t *)&message[i], keys);
}
// now message is decrypted (still may have padding bytes tho)
and I just used the sample C encipher/decipher functions from XTEA's Wikipedia page.

How to store double - endian independent

Despite the fact that big-endian computers are not very widely used, I want to store the double datatype in an endian-independent format.
For int, this is really simple, since bit shifts make that very convenient.
int number;
const int size = sizeof(number);
char bytes[size];
for (int i = 0; i < size; ++i)
    bytes[size-1-i] = (number >> 8*i) & 0xFF;
This code snippet stores the number in big-endian format, regardless of the machine it is being run on. What is the most elegant way to do this for double?
The best way for portability, taking the floating-point format into account, is to serialize/deserialize the mantissa and the exponent separately. For that you can use the frexp()/ldexp() functions.
For example, to serialize:
int exp;
unsigned long long mant;
mant = (unsigned long long)(ULLONG_MAX * frexp(number, &exp));
// then serialize exp and mant.
And then to deserialize:
// deserialize to exp and mant.
double result = ldexp ((double)mant / ULLONG_MAX, exp);
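Put together, a round-trip sketch of this approach (helper names are made up here; it also assumes a non-negative value, since the sign would need its own field):
#include <cmath>
#include <climits>

void split_double(double number, int& exp, unsigned long long& mant)
{
    mant = (unsigned long long)(ULLONG_MAX * frexp(number, &exp)); // mantissa scaled to an integer
}

double join_double(int exp, unsigned long long mant)
{
    return ldexp((double)mant / ULLONG_MAX, exp); // undo the scaling
}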
The elegant thing to do is to limit the endianness problem to as small a scope as possible. That narrow scope is the I/O boundary between your program and the outside world. For example, the functions that send binary data to / receive binary data from some other application need to be aware of the endian problem, as do the functions that write binary data to / read binary data from some data file. Make those interfaces cognizant of the representation problem.
Make everything else blissfully ignorant of the problem. Use the local representation everywhere else. Represent a double precision floating point number as a double rather than an array of 8 bytes, represent a 32 bit integer as an int or int32_t rather than an array of 4 bytes, et cetera. Dealing with the endianness problem throughout your code is going to make your code bloated, error prone, and ugly.
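A sketch of that boundary discipline (helper names are illustrative; assumes IEEE-754 doubles and 8-bit chars on both ends). Everywhere else in the program you just pass doubles around:
#include <cstdint>
#include <cstring>

void write_double_be(unsigned char out[8], double d)
{
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);        // grab the object representation
    for (int i = 0; i < 8; ++i)
        out[i] = (bits >> (56 - 8 * i)) & 0xFF; // most significant byte first
}

double read_double_be(const unsigned char in[8])
{
    uint64_t bits = 0;
    for (int i = 0; i < 8; ++i)
        bits = (bits << 8) | in[i];
    double d;
    std::memcpy(&d, &bits, sizeof d);
    return d;
}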
The same. Any numeric object, including double, is eventually several bytes which are interpreted in a specific order according to endianness. So if you reverse the order of the bytes you'll get exactly the same value in the reversed endianness.
char *src_data; // N doubles in the source byte order
char *dst_data; // output buffer of the same size
for (size_t i = 0; i < N * sizeof(double); i++)
    *dst_data++ = src_data[i ^ mask];
// where mask = 7 if native is little-endian,
//       mask = 0 if native is big-endian
The elegance lies in mask, which also handles the short and integer types: it's sizeof(elem) - 1 if the target and source endianness differ, and 0 otherwise.
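A generalized sketch of that mask trick (assuming sizeof(T) is a power of two, so the XOR reverses the bytes within each element):
#include <cstddef>

template <typename T>
void endian_copy(const char* src, char* dst, std::size_t count, bool differ)
{
    const std::size_t mask = differ ? sizeof(T) - 1 : 0;
    for (std::size_t i = 0; i < count * sizeof(T); ++i)
        dst[i] = src[i ^ mask]; // swaps byte order per element when mask != 0
}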
Not very portable and standards violating, but something like this:
#include <array>
#include <cstddef>
#include <cstdint>

std::array<unsigned char, 8> serialize_double( double const* d )
{
    std::array<unsigned char, 8> retval;
    char const* begin = reinterpret_cast<char const*>(d);
    union
    {
        uint8_t  i8s[8];
        uint16_t i16s[4];
        uint32_t i32s[2];
        uint64_t i64s;
    } u;
    u.i64s = 0x0001020304050607ull; // one byte order
    // u.i64s = 0x0706050403020100ull; // the other byte order
    // u.i8s[index] now says which output slot byte `index` of the double goes in
    for (size_t index = 0; index < 8; ++index)
    {
        retval[ u.i8s[index] ] = begin[index];
    }
    return retval;
}
This might handle a platform with 8-bit chars, 8-byte doubles, and any crazy-ass byte ordering (i.e., big-endian within words but little-endian between words for 64-bit values, for example).
Now, this doesn't cover the endianness of doubles being different from that of 64-bit ints.
An easier approach might be to cast your double into a 64-bit unsigned value, then output that as you would any other int.
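A sketch of that easier approach (assumes 64-bit doubles; memcpy sidesteps the aliasing and alignment issues of a pointer cast):
#include <cstdint>
#include <cstring>

uint64_t double_bits(double d)
{
    uint64_t u;
    std::memcpy(&u, &d, sizeof u); // copy the double's object representation
    return u;                      // now serialize u like any other integer
}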
#include <algorithm> // std::swap
#include <cstring>   // memcpy

void reverse_endian(double number, char (&bytes)[sizeof(double)])
{
    const int size = sizeof(number);
    memcpy(bytes, &number, size);
    for (int i = 0; i < size/2; ++i)
        std::swap(bytes[i], bytes[size-i-1]);
}

How to get values from unaligned memory in a standard way?

I know C++11 has some standard facilities which would allow one to get integral values from unaligned memory. How could something like this be written in a more standard way?
template <class R>
inline R get_unaligned_le(const unsigned char p[], const std::size_t s) {
    R r = 0;
    for (std::size_t i = 0; i < s; i++)
        r |= static_cast<R>(*p++ & 0xff) << (i * 8); // widen first so the shift can't overflow an int
    return r;
}
To take the values stored in little-endian order, you can then write:
uint_least16_t value1 = get_unaligned_le<uint_least16_t>(&buffer[0], 2);
uint_least32_t value2 = get_unaligned_le<uint_least32_t>(&buffer[2], 4);
How did the integral values get into the unaligned memory to begin with? If they were memcpyed in, then you can use memcpy to get them out. If they were read from a file or the network, you have to know their format: how they were written to begin with. If they are four-byte big-endian 2's complement (the usual network format), then something like:
// Supposes native int is at least 32 bits...
unsigned
getNetworkInt( unsigned char const* buffer )
{
    return static_cast<unsigned>( buffer[0] ) << 24
        | buffer[1] << 16
        | buffer[2] << 8
        | buffer[3];
}
This will work for any unsigned type, provided the type you're aiming for is at least as large as the type you input. For signed, it depends on just how portable you want to be. If all of your potential target machines are 2's complement, and will have an integral type with the same size as your input type, then you can use exactly the same code as above. If your native machine is a 1's complement 36-bit machine (e.g. a Unisys mainframe), and you're reading signed network-format integers (32-bit 2's complement), you'll need some additional logic.
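A hedged sketch of that additional logic, computing the value arithmetically from the 32-bit pattern so it doesn't rely on the host's representation (uses getNetworkInt from above; assumes long is at least 32 bits):
long getNetworkSigned( unsigned char const* buffer )
{
    unsigned long u = getNetworkInt( buffer );
    if ( u & 0x80000000UL )                      // sign bit of the wire pattern
        return -(long)( ~u & 0x7FFFFFFFUL ) - 1; // two's complement value of the pattern
    return (long)u;
}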
As always, create the desired variable and populate it byte-wise:
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <type_traits>

template <typename R>
R get(unsigned char * p, std::size_t len = sizeof(R))
{
    static_assert(std::is_trivially_copyable<R>::value, "R must be trivially copyable");
    assert(len >= sizeof(R));
    R result;
    std::copy(p, p + sizeof(R), reinterpret_cast<unsigned char *>(&result));
    return result;
}
This only works universally for trivially copyable types, though you can probably use it for non-trivial types if you have additional guarantees from elsewhere.
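A quick usage sketch; the bytes are interpreted in the host's native order, since the copy preserves them as-is:
#include <cstdint>

unsigned char buffer[16] = { /* ... */ };
uint32_t v = get<uint32_t>(buffer + 3); // unaligned read, done byte-wise, no UB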

I need to create a very large array of bits/boolean values. How would I do this in C/C++?

Is it even possible to create an array of bits with more than 100000000 elements? If it is, how would I go about doing this? I know that for a char array I can do this:
char* array;
array = (char*)malloc(100000000 * sizeof(char));
If I were to declare the array by char array[100000000] then I would get a segmentation fault, since the maximum number of elements has been exceeded, which is why I use malloc.
Is there something similar I can do for an array of bits?
If you are using C++, std::vector<bool> is specialized to pack elements into a bit map. Of course, if you are using C++, you need to stop using malloc.
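For example:
#include <vector>

std::vector<bool> bits(100000000); // ~12.5 MB on the heap, all false
bits[12345] = true;
bool b = bits[12345];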
You could try looking at boost::dynamic_bitset. Then you could do something like the following (taken from Boost's example page):
boost::dynamic_bitset<> x(100000000); // all 0's by default
x[0] = 1;
x[1] = 1;
x[4] = 1;
The bitset will use a single bit for each element so you can store 32 items in the space of 4 bytes, decreasing the amount of memory required considerably.
In C and C++, char is the smallest type. You can't directly declare an array of bits. However, since an array of any basic type is fundamentally made of bits, you can emulate one, something like this (code untested):
#include <limits.h> /* CHAR_BIT */

#define BITS_PER_WORD (sizeof(unsigned) * CHAR_BIT)

unsigned *array;
array = (unsigned *) malloc((100000000 / BITS_PER_WORD + 1) * sizeof(unsigned));

/* Retrieves the value in bit i */
#define GET_BIT(array, i) (array[(i) / BITS_PER_WORD] & (1u << ((i) % BITS_PER_WORD)))
/* Sets bit i to true */
#define SET_BIT(array, i) (array[(i) / BITS_PER_WORD] |= (1u << ((i) % BITS_PER_WORD)))
/* Sets bit i to false */
#define CLEAR_BIT(array, i) (array[(i) / BITS_PER_WORD] &= ~(1u << ((i) % BITS_PER_WORD)))
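Usage would then look like:
SET_BIT(array, 12345);           /* turn bit 12345 on */
if (GET_BIT(array, 12345)) { }   /* test it */
CLEAR_BIT(array, 12345);         /* turn it back off */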
The segmentation fault you noticed is due to running out of stack space. Of course you can't declare a local variable that is 12.5 MB in size (100 million bits), let alone 100MB in size (100 million bytes) in a thread with a stack of ~ 4 MB. Should work as a global variable, although then you may end up with a 12 or 100 MB executable file -- still not a good idea. Dynamic allocation is definitely the right thing to do for large buffers like that.
If you are allowed to use the STL, then I would use std::bitset.
(For 100,000,000 bits, it would use 100000000 / 32 unsigned ints underneath, each storing 32 bits.)
std::vector<bool>, already mentioned, is another good solution.
There are a few approaches to creating a bitmap in C++.
If you already know the size of bitmap at compile time, you can use the STL, std::bitset template.
This is how you would do it with bitset
std::bitset<100000000> array;
Otherwise, if the size of the bitmap changes dynamically during runtime, you can use std::vector<bool> or boost::dynamic_bitset, as recommended here: http://en.cppreference.com/w/cpp/utility/bitset (see the note at the bottom)
Yes, but it's going to be a little bit more complicated!
The better way to store bits is to use the bits within the chars themselves: you can store 8 bits in each char, which will "only" require 12'500'000 octets!
Here is some documentation about binary representations: http://www.somacon.com/p125.php
You should look on Google :)
Other solution:
#include <limits.h> /* CHAR_BIT */

unsigned char * array;
array = (unsigned char *) malloc ( 100000000 / CHAR_BIT + 1 );

/* Sets (set == true) or tests (set == false) the bit at position. */
bool MapBit ( unsigned char arraybit[], DWORD position, bool set )
{
    /* works for bit positions 0 to 4294967295 */
    DWORD bytepos = ( position / 8 );        /* which byte */
    unsigned char bitpos = ( position % 8 ); /* which bit within that byte */
    unsigned char bit = 0x01;
    if ( bitpos )
    {
        bit = bit << bitpos;
    }
    if ( set )
    {
        arraybit [ bytepos ] |= bit;
    }
    else
    {
        /* get */
        if ( arraybit [ bytepos ] & bit )
            return true;
    }
    return false;
}
I'm fond of the bitarray that's in the open source fxt library at http://www.jjj.de/fxt/. It's simple, efficient and contained in a few headers, so it's easy to add to your project. Plus there's many complementary functions to use with the bitarray (see http://www.jjj.de/bitwizardry/bitwizardrypage.html).

Reading "integer" size bytes from a char* array.

I want to read sizeof(int) bytes from a char* array.
a) In what scenarios do we need to worry whether endianness needs to be checked?
b) How would you read the first 4 bytes, either taking endianness into consideration or not?
EDIT: The sizeof(int) bytes that I have read need to be compared with an integer value.
What is the best approach to this problem?
Do you mean something like this?:
char* a;
int i;
memcpy(&i, a, sizeof(i));
You only have to worry about endianness if the source of the data is from a different platform, like a device.
a) You only need to worry about "endianness" (i.e., byte-swapping) if the data was created on a big-endian machine and is being processed on a little-endian machine, or vice versa. There are many ways this can occur, but here are a couple of examples.
You receive data on a Windows machine via a socket. Windows employs a little-endian architecture while network data is "supposed" to be in big-endian format.
You process a data file that was created on a system with a different "endianness."
In either of these cases, you'll need to byte-swap all numbers that are bigger than 1 byte, e.g., shorts, ints, longs, doubles, etc. However, if you are always dealing with data from the same platform, endian issues are of no concern.
b) Based on your question, it sounds like you have a char pointer and want to extract the first 4 bytes as an int and then deal with any endian issues. To do the extraction, use this:
int n = *(reinterpret_cast<int *>(myArray)); // where myArray is your data
Obviously, this assumes myArray is not a null pointer; otherwise, this will crash since it dereferences the pointer, so employ a good defensive programming scheme.
To swap the bytes on Windows, you can use the ntohs()/ntohl() and/or htons()/htonl() functions defined in winsock2.h. Or you can write some simple routines to do this in C++, for example:
inline unsigned short swap_16bit(unsigned short us)
{
    return (unsigned short)(((us & 0xFF00) >> 8) |
                            ((us & 0x00FF) << 8));
}

inline unsigned long swap_32bit(unsigned long ul)
{
    return (unsigned long)(((ul & 0xFF000000) >> 24) |
                           ((ul & 0x00FF0000) >>  8) |
                           ((ul & 0x0000FF00) <<  8) |
                           ((ul & 0x000000FF) << 24));
}
Depends on how you want to read them; I get the feeling you want to cast 4 bytes into an integer. Doing so over network-streamed data will usually end up in something like this:
int foo = *(int*)(stream+offset_in_stream);
The easy way to solve this is to make sure whatever generates the bytes does so in a consistent endianness. Typically the "network byte order" used by various TCP/IP stuff is best: the library routines htonl and ntohl work very well with this, and they are usually fairly well optimized.
However, if network byte order is not being used, you may need to do things in other ways. You need to know two things: the size of an integer, and the byte order. Once you know that, you know how many bytes to extract and in which order to put them together into an int.
Some example code that assumes sizeof(int) is the right number of bytes:
#include <limits.h>
#include <stddef.h>

int bytes_to_int_big_endian(const char *bytes)
{
    size_t i;
    int result = 0;
    for (i = 0; i < sizeof(int); ++i)
        result = (result << CHAR_BIT) + (unsigned char)bytes[i]; /* cast avoids sign extension */
    return result;
}

int bytes_to_int_little_endian(const char *bytes)
{
    size_t i;
    int result = 0;
    for (i = 0; i < sizeof(int); ++i)
        result += (unsigned char)bytes[i] << (i * CHAR_BIT);
    return result;
}
#ifdef TEST
#include <stdio.h>

int main(void)
{
    const int correct = 0x01020304;
    const char little[] = "\x04\x03\x02\x01";
    const char big[] = "\x01\x02\x03\x04";
    printf("correct:            %0x\n", correct);
    printf("from big-endian:    %0x\n", bytes_to_int_big_endian(big));
    printf("from little-endian: %0x\n", bytes_to_int_little_endian(little));
    return 0;
}
#endif
How about
int int_from_bytes(const char * bytes, _Bool reverse)
{
    if (!reverse)
        return *(int *)(void *)bytes;

    char tmp[sizeof(int)];
    for (size_t i = sizeof(tmp); i--; ++bytes)
        tmp[i] = *bytes;
    return *(int *)(void *)tmp;
}
You'd use it like this:
int i = int_from_bytes(bytes, SYSTEM_ENDIANNESS != ARRAY_ENDIANNESS);
If you're on a system where casting void * to int * may result in alignment conflicts, you can use
int int_from_bytes(const char * bytes, _Bool reverse)
{
    int tmp;
    if (reverse)
    {
        for (size_t i = sizeof(tmp); i--; ++bytes)
            ((char *)&tmp)[i] = *bytes;
    }
    else memcpy(&tmp, bytes, sizeof(tmp));
    return tmp;
}
You shouldn't need to worry about endianness unless you are reading the bytes from a source created on a different machine, e.g. a network stream.
Given that, can't you just use a for loop?
void ReadBytes(char * stream) {
    for (size_t i = 0; i < sizeof(int); i++) {
        char foo = stream[i];
    }
}
Are you asking for something more complicated than that?
You need to worry about endianness only if the data you're reading is composed of numbers which are larger than one byte.
If you're reading sizeof(int) bytes and expect to interpret them as an int, then endianness makes a difference. Essentially, endianness is the way in which a machine interprets a series of more than one byte into a numerical value.
Just use a for loop that moves over the array in sizeof(int) chunks.
Use the function ntohl (found in the header <arpa/inet.h>, at least on Linux) to convert from bytes in the network order (network order is defined as big-endian) to local byte-order. That library function is implemented to perform the correct network-to-host conversion for whatever processor you're running on.
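A sketch of that (memcpy handles any alignment; ntohl then converts the 32-bit value from network to host order):
#include <arpa/inet.h>
#include <cstdint>
#include <cstring>

uint32_t read_net_u32(const char* p)
{
    uint32_t n;
    std::memcpy(&n, p, sizeof n);
    return ntohl(n);
}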
Why read when you can just compare?
bool AreEqual(int i, char *data)
{
    return memcmp(&i, data, sizeof(int)) == 0;
}
If you are worried about endianness, you need to convert all of the integers to some invariant form. htonl and ntohl are good examples.