I use .data() to get the 16-byte data array.
Later I write it to a file, and I want to load it back into a uuid variable.
Should I just perform a memory copy to and from the variable, as below (C++11)?
boost::uuids::uuid uuid = boost::uuids::random_generator()();
char data[16];
std::copy_n(&uuid, 16, data); // copy to data
std::copy_n(data, 16, &uuid); // copy from data (?)
First, whenever you find yourself wondering how to use Boost classes, there's the docs:
http://www.boost.org/doc/libs/1_58_0/libs/uuid/uuid.html
{   // example using memcpy
    unsigned char uuid_data[16];
    // fill uuid_data
    boost::uuids::uuid u;
    memcpy(&u, uuid_data, 16);
}
{   // example using aggregate initializers
    boost::uuids::uuid u =
    { 0x12, 0x34, 0x56, 0x78
    , 0x90, 0xab
    , 0xcd, 0xef
    , 0x12, 0x34
    , 0x56, 0x78, 0x90, 0xab, 0xcd, 0xef
    };
}
Since memcpy works, I'd expect copy_n over the raw bytes to work as well.
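For completeness, here's a minimal sketch of the copy_n variant (assuming Boost's uuid type, whose begin() iterator walks the 16 underlying bytes):

boost::uuids::uuid uuid = boost::uuids::random_generator()();

unsigned char data[16];
std::copy_n(uuid.begin(), 16, data);   // uuid -> raw bytes (e.g. before writing to the file)
std::copy_n(data, 16, uuid.begin());   // raw bytes -> uuid (after reading them back)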
Related
I am using an Arm Cortex-M3 processor. I receive binary data in an unsigned char array, which must be cast into a suitable variable to be used for further computation:
unsigned char gps[24] = { 0xFA, 0x05, 0x08, 0x00, 0x10, 0x00, 0xA4, 0x15, 0x02, 0x42, 0x4D, 0xDF,
                          0xEB, 0x3F, 0xF6, 0x1A, 0x36, 0xBE, 0xBF, 0x2D, 0xA4, 0x41, 0xAF, 0x1A };
int i = 6;
float f = (float) *(double*)&gps[i];
This code works on a computer and gives the correct value of "f", but it fails on the Cortex-M3. I understand that the processor does not have a floating-point unit and hence doesn't support 64-bit operations, but is there a work-around for the cast shown above?
Note that the code below works on the processor; only the casting shown above fails:
double d = 9.7;
Also note that 16- and 32-bit casts work, as shown below; only double or uint64_t fail.
uint16_t k = *(uint16_t*)&gps[i];
Is there an alternative solution?
Casting the address of an unsigned char to a pointer to double – and then using it – violates the strict aliasing rules; more importantly in your case (as discussed below), it also breaks the alignment rules required for accessing multi-byte data units such as double.
Many compilers will warn about this; clang-cl gives the following for the (double*)&gps[i] expression:
warning : cast from 'unsigned char *' to 'double *' increases required
alignment from 1 to 8 [-Wcast-align]
Now, some architectures aren't too fussy about alignment of data types, and the code may (seem to) work on many of those. However, the Cortex-M3 is very fussy about the alignment requirements for multi-byte data types (such as double).
To remove undefined behaviour, you should use the memcpy function to transfer the component bytes of your gps array into a real double variable, then cast that to a float:
unsigned char gps[24] = { 0xFA, 0x05, 0x08, 0x00, 0x10, 0x00, 0xA4, 0x15, 0x02, 0x42, 0x4D, 0xDF,
                          0xEB, 0x3F, 0xF6, 0x1A, 0x36, 0xBE, 0xBF, 0x2D, 0xA4, 0x41, 0xAF, 0x1A };
int i = 6;
double d; // This will be correctly aligned by the compiler
memcpy(&d, &gps[i], sizeof(double)); // This doesn't need source alignment
float f = (float)d; // ... so now, this is a 'safe' cast down to single precision
The memcpy call will use (or generate) code that can access unaligned data safely – even if that means a significant slow-down of the access.
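If you need this kind of read in more than one place, a small helper keeps the memcpy in one spot. This is a sketch of my own, not part of the original answer; read_unaligned is just an illustrative name:

#include <cstring>

template <typename T>
T read_unaligned(const unsigned char* src)
{
    T value;
    std::memcpy(&value, src, sizeof(T));   // alignment-safe byte copy
    return value;
}

// e.g.  float f = (float)read_unaligned<double>(&gps[6]);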
There is a program I am working on that requires me to access data from a char array containing hex values. In this example I have to go through a function called func() to access the data. func() contains three pointer variables, each of a different type, and I can use any of them to access the data in the array; whichever type I choose affects what values the pointer reads. So here's the code:
unsigned char data[] =
{
    0xBA, 0xDA, 0x69, 0x50,
    0x33, 0xFF, 0x33, 0x40,
    0x20, 0x10, 0x03, 0x30,
    0x66, 0x03, 0x33, 0x40,
};

void func()
{
    unsigned char *ch;
    unsigned int *i;
    unsigned short *s;
    unsigned int v;

    s = (unsigned short *)&data[0];
    v = s[6];
    printf("val:0x%x \n", v);
}
Output:
val:0x366
The problem with this output is that it should be 0x0366 with the zero in front of the 3, but it gets cut off at the printf statement, and I'm not allowed to modify that. How else could I fix this?
Use a format that specifies leading zeros: %04x.
Without changing the format passed to printf or replacing it entirely I'm afraid there's no way to affect the output.
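For reference, a tiny illustration of the difference (a standalone sketch, not the asker's func()):

unsigned int v = 0x366;
printf("val:0x%x \n", v);    // prints val:0x366  (leading zero dropped)
printf("val:0x%04x \n", v);  // prints val:0x0366 (zero-padded to 4 digits)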
How can I take text passed as a command-line argument and automatically transform it into a hex array?
I tried with:
unsigned char aesKey[32] = argv[1];
but I get errors.
The desired result would look like this:
unsigned char aesKey[32] = {
0x53, 0x28, 0x40, 0x6e, 0x2f, 0x64, 0x63, 0x5d, 0x2d, 0x61, 0x77, 0x40, 0x76, 0x71, 0x77, 0x28,
0x74, 0x61, 0x7d, 0x66, 0x61, 0x73, 0x3b, 0x5d, 0x66, 0x6d, 0x3c, 0x3f, 0x7b, 0x66, 0x72, 0x36
};
unsigned char *buf;
aes256_context ctx;
aes256_init(&ctx, aesKey);
for (unsigned long i = 0; i < lSize/16; i++) {
    buf = text + (i * 16);
    aes256_encrypt_ecb(&ctx, buf);
}
aes256_done(&ctx);
Thanks in advance
In C and C++, when you have code like
char name[] = "John Smith";
the compiler knows at compile time what the size of that char array is and what all of its values will be, so it can allocate it on the stack frame and assign it the value.
When you have code like
char * strptr = foo();
char str[] = strptr;
The compiler doesn't know the size or the value of the string pointed to by strptr. That is why this is not allowed in C/C++.
In other words, only string literals can be assigned to char arrays, and that too only at the time of declaration.
So
char name[] = "John Smith";
is allowed.
char name[32];
name = "John Smith";
is not allowed.
Use memcpy
So you could use memcpy. (Or use c++ alternative that others have alluded to)
unsigned char *aesKey;
size_t len = (strlen(argv[1]) + 1) * sizeof(unsigned char);   // +1 for the terminating '\0'
aesKey = (unsigned char *)malloc(len);                        // cast required in C++
memcpy(aesKey, argv[1], len);
The old solution
(here is my previous answer, the answer above is better)
So you need to use strncpy.
unsigned char aesKey[32];
strncpy((char *) aesKey, argv[1], 32);
Notice the routine is strncpy not strcpy. strcpy is unsafe. (Thanks PRouleau for the arg fix)
If strncpy is not available in Visual Studio then you may have to try strcpy_s (Thanks Google: user:427390)
In C/C++, the compiler does not automatically manipulate the arrays. You have to specify how to copy them.
The good old way is with memcpy(). A more modern way is with std::copy(). In any case, you have to validate the length of argv[1] before copying into aesKey.
For the conversion into hex, you probably have to transform a string like "AAEE3311" (up to 2*32 chars) into bytes. You should use std::istringstream and fill your aesKey position by position.
Ex:
std::istringstream Input(argv[1]);
Input >> std::hex >> aesKey[0];
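Note that streaming std::hex into an unsigned char extracts a single raw character rather than a parsed byte, so in practice you read into an int first or parse two characters at a time. A minimal sketch of the position-by-position parsing, assuming argv[1] is one long hex string such as "5328406e..." (the names are illustrative):

#include <cstdlib>
#include <cstring>

unsigned char aesKey[32] = {0};
const char* hex = argv[1];                       // e.g. "5328406e2f64..."
size_t nbytes = std::strlen(hex) / 2;
for (size_t i = 0; i < nbytes && i < sizeof(aesKey); ++i) {
    char pair[3] = { hex[2 * i], hex[2 * i + 1], '\0' };   // one byte = two hex chars
    aesKey[i] = (unsigned char)std::strtol(pair, nullptr, 16);
}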
I would imagine a program being called as below -
myprog 0x53 0x28 0x40 0x6e 0x2f 0x64 0x63
Inside the program I would have a loop to assign the arguments to the array -
const int size = 32;
unsigned char aesKey[size];
char* p;

for (int i = 1; i < argc && i <= size; ++i)
{
    aesKey[i - 1] = (unsigned char)strtol(argv[i], &p, 16);
}
I am using MSVC++ 2010 Express, and I would love to know how to convert
BYTE Key[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64};
to "Password" I am having a lot of trouble doing this. :( I will use this knowledge to take things such as...
BYTE Key[] { 0xC2, 0xB3, 0x72, 0x3C, 0xC6, 0xAE, 0xD9, 0xB5, 0x34, 0x3C, 0x53, 0xEE, 0x2F, 0x43, 0x67, 0xCE };
And other various variables and convert them accordingly.
I'd like to end up with "Password" stored in a char array.
Key is an array of bytes. If you want to store it in a string, for example, you should construct the string using its range constructor, that is:
string key_string(Key, Key + sizeof(Key)/sizeof(Key[0]));
Or if you can compile using C++11:
string key_string(begin(Key), end(Key));
To get a char* I'd go the C way and use strndup:
char* key_string = strndup(reinterpret_cast<const char*>(Key), sizeof(Key)/sizeof(Key[0]));
However, if you're using C++ I strongly suggest you use string instead of char* and only convert to char const* when absolutely necessary (e.g. when calling a C API). See here for good reasons to prefer std::string.
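Putting it together, a minimal self-contained sketch (assuming BYTE is a typedef for unsigned char, as in the Windows headers):

#include <iostream>
#include <string>

typedef unsigned char BYTE;   // assumption: matches the asker's BYTE

int main()
{
    BYTE Key[] = { 0x50, 0x61, 0x73, 0x73, 0x77, 0x6F, 0x72, 0x64 };

    // Range constructor: copies exactly sizeof(Key) bytes, no terminator required.
    std::string key_string(Key, Key + sizeof(Key) / sizeof(Key[0]));

    std::cout << key_string << '\n';   // prints: Password
}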
All you are lacking is a null terminator, so after doing this:
char Key_str[(sizeof Key)+1];
memcpy(Key_str, Key, sizeof Key);
Key_str[sizeof Key] = '\0';
Key_str will be usable as a regular char * style string.
Can't figure out why I am getting seemingly random output from the Crypto++ RC2 decoder. The input is always the same, but the output is always different.
const char * cipher ("o4hk9p+a3+XlPg3qzrsq5PGhhYsn+7oP9R4j9Yh7hp08iMnNwZQnAUrZj6DWr37A4T+lEBDMo8wFlxliuZvrZ9tOXeaTR8/lUO6fXm6NQpa5P5aQmQLAsmu+eI4gaREvZWdS0LmFxn8+zkbgN/zN23x/sYqIzcHU");
int keylen (64);
unsigned char keyText[] = { 0x1a, 0x1d, 0xc9, 0x1c, 0x90, 0x73, 0x25, 0xc6, 0x92, 0x71, 0xdd, 0xf0, 0xc9, 0x44, 0xbc, 0x72, 0x00 };
std::string key((char*)keyText);
std::string data;
CryptoPP::RC2Decryption rc2(reinterpret_cast<const byte *>(key.c_str()), keylen);
CryptoPP::ECB_Mode_ExternalCipher::Decryption rc2Ecb(rc2);
CryptoPP::StringSource
    ( cipher
    , true
    , new CryptoPP::Base64Decoder
        ( new CryptoPP::StreamTransformationFilter
            ( rc2Ecb
            , new CryptoPP::StringSink(data)
            , CryptoPP::BlockPaddingSchemeDef::NO_PADDING
            )
        )
    );
std::cout << data << '\n';
The parameters to the RC2::Decryption constructor are: (pointer to key-bytes, length of key-bytes). You are giving it a pointer to 16 bytes but using a length of 64 bytes. Crypto++ is reading uninitialized memory when reading the key, so you get random results.
If you want to indicate an effective key-length, you can use the other constructor like this:
CryptoPP::RC2Decryption rc2(keyText, 16, keylen);
Note that you should not use a std::string to hold your key. It is completely legal for a key to contain a 0x00 byte, and constructing the std::string from a C-style string (as done here with keyText) stops at the first 0x00, silently truncating the key.
RC2Decryption should have been defined as:
CryptoPP::RC2Decryption rc2(reinterpret_cast<const byte *>(key.c_str()), key.size(), keylen);
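Putting both points together, a minimal sketch that keeps the key as raw bytes instead of a std::string, using the three-argument constructor shown above (the 16 key bytes and 64-bit effective length come from the question):

const unsigned char keyText[16] = {
    0x1a, 0x1d, 0xc9, 0x1c, 0x90, 0x73, 0x25, 0xc6,
    0x92, 0x71, 0xdd, 0xf0, 0xc9, 0x44, 0xbc, 0x72
};
CryptoPP::RC2Decryption rc2(keyText, sizeof(keyText), /*effective key bits*/ 64);
CryptoPP::ECB_Mode_ExternalCipher::Decryption rc2Ecb(rc2);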