In the project I have a struct that has one member which is an array of unsigned bytes (uint8_t), like below:
typedef uint8_t U8;
typedef struct {
/* other members */
U8 Data[8];
} Frame;
A pointer to a variable of type Frame is received; during debugging I see it as below in the console of VS2017:
/* the function signature */
void converter(Frame* frm){...}
frm->Data 0x20f1feb0 "6þx}\x1òà... unsigned char[8] // in debug console
Now I want to assign it to an 8-byte string.
I did it like below, but it concatenates the numeric values of the array elements and results in something like "541951901201251242224":
std::string temp;
for (unsigned char i : frm->Data)
{
temp += std::to_string(i);
}
I also tried const std::string temp(reinterpret_cast<char*>(frm->Data, 8));, which throws an exception.
In your original cast const std::string temp(reinterpret_cast<char*>(frm->Data, 8)); you put the closing parenthesis in the wrong place, so the comma operator discards frm->Data and it ends up doing reinterpret_cast<char*>(8); dereferencing that bogus pointer is the cause of the crash.
Fix:
std::string temp(reinterpret_cast<char const*>(frm->Data), sizeof frm->Data);
Just leave out the std::to_string. It converts numeric values to their string representation, so even if you give it a char, it will promote it to an integer and produce the decimal representation of that integer instead. On the other hand, appending a char to an std::string using += works fine. Try this:
#include <cstdint>
#include <iostream>
#include <string>
int main() {
typedef uint8_t U8;
U8 Data[] = { 0x48, 0x65, 0x6C, 0x6C, 0x6F };
std::string temp;
for (unsigned char i : Data)
{
temp += i;
}
std::cout << temp << std::endl;
}
See here for more information and examples on std::string's += operator.
Related
When trying to define the char:
char q = '§';
CLion throws an error: "Character too large for enclosing character literal type". This is weird, as if I look up the character code of § it is just 167.
If I use:
char c;
std::string q = "§";
for (char el:q) {
c = el;
std::cout << c;
}
the output reads: §
and:
int c;
std::string q = "§";
for (char el:q) {
c = (int) el;
std::cout << c;
}
outputs: -62-89
So it seems that the character overflows the char type.
I am implementing RSA encryption using unsigned long long int instead of int, and in this case the overflow still occurs, which corrupts the decrypted data. How can I convert this character, and others that may overflow the char type, into their respective numeric values (for this example, '§' should give 167)?
conversion with unsigned long long int:
#define ull unsigned long long int
int main() {
ull c;
std::string q = "§";
for (char el:q) {
c = (ull) el;
std::cout << c;
}
}
output: 1844674407370955155418446744073709551527
using wchar_t also did not fix the issue.
One way to go around it is to use unicode string:
auto q = u"\u00A7";
Unicode string literals (u for 16-bit and U for 32-bit) can in general be used similarly to the normal std::string type, but when you iterate over or index into them, you get the corresponding character type: char16_t or char32_t.
I have this function; on the first call it gives me back the correct encrypted value:
120692dbcdca656394fc10147e2418f2
But all the values that come after are incorrect:
764e1a39b43c42f30da2e9e327d4ed22
b93b46dbc936ae3b06f571ffe1a59cac
b93b46dbc936ae3b06f571ffe1a59cac
a71787b35326e282f8c1bf3a0a034620
I'm new to C++ and I think it is a matter of initialization of one of the variables.
I ran many tests but I'm not able to pinpoint the error.
Could someone help me, please?
Thank you.
#include <openssl/aes.h>
#include <cstdio>   // sprintf
#include <cstring>  // strlen
#include <iostream>
#include <iomanip>
using namespace std;
char* pxEnAndDeCrypt(char* pStr )
{
static const unsigned char key[] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};
char *ptr = NULL;
unsigned char enc_out[80]= {};
unsigned char dec_out[80]= {};
int i,j,lenHexa ;
char enc_out_HEXA[200]= {};
unsigned char enc_out_TRANSF[200]= {};
unsigned char enc_out_BACK[200]= {};
AES_KEY enc_key, dec_key;
// ENCRYPT : Input =*pStr Output = enc_out
AES_set_encrypt_key(key, 128, &enc_key);
AES_encrypt((const unsigned char *)pStr, enc_out, &enc_key);
// TRANSFORM OUTPUT OF ENCRYPT TO HEXA : Input =enc_out Output = enc_out_HEXA
int len = strlen((char*)enc_out);
for (i = 0, j = 0; i < len; ++i, j += 2)
{
sprintf(enc_out_HEXA + j, "%02x", enc_out[i] & 0xff);
}
ptr = (char *) enc_out_HEXA;
// OUTPUT
return ptr;
}
The problem with your code is that your function returns a pointer to enc_out_HEXA.
ptr = (char *) enc_out_HEXA;
return ptr;
The issue here is that enc_out_HEXA is declared inside pxEnAndDeCrypt, so it no longer exists once you have exited pxEnAndDeCrypt; your function is returning a pointer to an object which no longer exists. This results in the strange behaviour you see.
Since you are programming C++, the simple solution is to use C++ (your current code is pure C). Instead of returning a pointer, return a std::string.
#include <string>
std::string pxEnAndDeCrypt(char* pStr )
{
...
return ptr;
}
There are many other places in the above code where you could replace the C code with C++. But this simple change should be enough to get over the current problem.
Of course you will also have to change the code that calls pxEnAndDeCrypt, but since you didn't post that I can't really help with that.
EDIT
Here's an alternative solution that doesn't require std::string.
The basic problem is that enc_out_HEXA has been declared inside the pxEnAndDeCrypt function and so you can't use it (or a pointer to it) outside the function. So one solution is to move the enc_out_HEXA to the calling function and pass a pointer to that array to the function. Like this
void pxEnAndDeCrypt(char* pStr, char* result)
{
...
for (i = 0, j = 0; i < len; ++i, j += 2)
{
sprintf(result + j, "%02x", enc_out[i] & 0xff);
}
}
Then somewhere else in your code you will have
char enc_out_HEXA[200];
pxEnAndDeCrypt(some_string, enc_out_HEXA);
That's the solution that would be used in a C program.
Suppose I want to write decimal 31 into a binary file (which is already loaded into a vector) as 4 bytes, i.e. as 00 00 00 1f, but I don't know how to convert a decimal number into its 4-byte representation.
So the expected hex in the vector of unsigned char is:
0x00 0x00 0x00 0x1f // int value of this is 31
To do this I tried the following:
std::stringstream stream;
stream << std::setfill('0') << std::setw(sizeof(int) * 2) << std::hex << 31;
cout << stream.str();
Output:
0000001f
The code above gives the output as a string, but I want it in a vector of unsigned char, so that after the conversion my vector holds the elements 0x00 0x00 0x00 0x1F.
Without bothering with endianness you could copy the int value into a character buffer of the appropriate size. This buffer could be the vector itself.
Perhaps something like this:
#include <cstdint>
#include <cstring>
#include <vector>
std::vector<uint8_t> int_to_vector(unsigned value)
{
// Create a vector of unsigned characters (bytes on a byte-oriented platform)
// The size will be set to the same size as the value type
std::vector<uint8_t> buffer(sizeof value);
// Do a byte-wise copy of the value into the vector data
std::memcpy(buffer.data(), &value, sizeof value);
return buffer;
}
The order of bytes in the vector will always be the host's native order. If a specific order is mandated, then each byte of the multi-byte value needs to be copied into a specific element of the vector using bitwise operations (std::memcpy can't be used).
Also note that this function will break strict aliasing if uint8_t isn't an alias of unsigned char. And uint8_t is an optional type; there are platforms which don't have 8-bit entities (though they are not common).
For an endianness-specific variant, where each value of a byte is extracted one by one and added to the vector, perhaps something like this:
std::vector<uint8_t> int_to_be_vector(unsigned value)
{
// Create a vector of unsigned characters (bytes on a byte-oriented platform)
// The size will be set to the same size as the value type
std::vector<uint8_t> buffer(sizeof value);
// For each byte in the multi-byte value, copy it to the "correct" place in the vector
for (size_t i = buffer.size(); i > 0; --i)
{
// The cast truncates the value, dropping all but the lowest eight bits
buffer[i - 1] = static_cast<uint8_t>(value);
value >>= 8;
}
return buffer;
}
Example of it working
You could use a loop to extract one byte at a time of the original number and store that in a vector.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>
using u8 = std::uint8_t;
using u32 = std::uint32_t;
std::vector<u8> GetBytes(const u32 number) {
const u32 mask{0xFF};
u32 remaining{number};
std::vector<u8> result{};
while (remaining != 0u) {
const u32 bits{remaining & mask};
const u8 res{static_cast<u8>(bits)};
result.push_back(res);
remaining >>= 8u;
}
std::reverse(std::begin(result), std::end(result));
return result;
}
int main() {
const u32 myNumber{0xABC123};
const auto bytes{GetBytes(myNumber)};
std::cout << std::hex << std::showbase;
for (const auto b : bytes) {
std::cout << static_cast<u32>(b) << ' ';
}
std::cout << std::endl;
return 0;
}
The output of this program is:
0xab 0xc1 0x23
I am trying to store two integer values into a char array in C++.
Here is the code:
char data[20];
*data = static_cast <char> (time_delay); //time_delay is of int type
*(data + sizeof(int)) = static_cast<char> (wakeup_code); //wakeup_code is of int type
Now, at the other end of the program, I want to reverse this operation. That is, from this char array I need to obtain the values of time_delay and wakeup_code.
How can I do that?
Thanks,
Nick
P.S.: I know this is a silly way to do it, but trust me, it's a constraint.
I think when you write static_cast<char>, the value is converted to a 1-byte char, so if it didn't fit in a char to begin with, you'll lose data.
What I'd do is use *((int*)data) and *((int*)(data+sizeof(int))) for both reading and writing the ints in the array.
*((int*)data) = time_delay;
*((int*)(data+sizeof(int))) = wakeup_code;
....
time_delay = *((int*)data);
wakeup_code = *((int*)(data+sizeof(int)));
Alternatively, you might also write:
reinterpret_cast<int*>(data)[0]=time_delay;
reinterpret_cast<int*>(data)[1]=wakeup_code;
If you are working on a PC x86 architecture then there are no alignment problems (except for speed) and you can cast a char * to an int * to do the conversions:
char data[20];
*((int *)data) = first_int;
*((int *)(data+sizeof(int))) = second_int;
and the same syntax can be used for reading from data by just swapping sides of =.
Note however that this code is not portable because there are architectures where an unaligned operation may be not just slow but actually illegal (crash).
In those cases probably the nicest approach (that also gives you endianness control in case data is part of a communication protocol between different systems) is to build the integers explicitly in code one char at a time:
first_uint = ((unsigned char)data[0] |
((unsigned char)data[1] << 8) |
((unsigned char)data[2] << 16) |
((unsigned char)data[3] << 24));
data[4] = second_uint & 255;
data[5] = (second_uint >> 8) & 255;
data[6] = (second_uint >> 16) & 255;
data[7] = (second_uint >> 24) & 255;
I haven't tried it, but the following should work:
char data[20];
int value;
memcpy(&value,data,sizeof(int));
Try the following:
union IntsToChars {
struct {
int time_delay;
int wakeup_value;
} Integers;
char Chars[20];
};
extern char* somebuffer;
void foo()
{
IntsToChars n2c;
n2c.Integers.time_delay = 1;
n2c.Integers.wakeup_value = 2;
memcpy(somebuffer,n2c.Chars,sizeof(n2c)); //an example of using the char array containing the integer data
//...
}
Using such a union should eliminate the alignment problem, unless the data is passed to a machine with a different architecture. (Note that reading a union member other than the one last written is technically undefined behaviour in C++, although most compilers support it.)
#include <cstdlib>   // atoi
#include <cstring>   // strcpy
#include <sstream>
#include <string>
int main ( int argc, char **argv) {
char ch[10];
int i = 1234;
std::ostringstream oss;
oss << i;
strcpy(ch, oss.str().c_str());
int j = atoi(ch);
}
int x = 1231212;
memcpy(pDVal, &x, 4);
int iDSize = sizeof(double);
int i = 0;
for (; i<iDSize; i++)
{
char c;
memcpy(&c, &(pDVal[i]), 1);
printf("%d|\n", c);
printf("%x|\n", c);
}
I used the above code segment to print the hex value of each byte of an integer, but it is not working properly. What is the issue here?
Try something like this:
void Int32ToUInt8Arr( int32_t val, uint8_t *pBytes )  // needs <cstdint>
{
pBytes[0] = (uint8_t)val;
pBytes[1] = (uint8_t)(val >> 8);
pBytes[2] = (uint8_t)(val >> 16);
pBytes[3] = (uint8_t)(val >> 24);
}
or perhaps, using C++/CLI and .NET's BitConverter:
UInt32 arg = 18;
array<Byte>^byteArray = BitConverter::GetBytes( arg);
// {0x12, 0x00, 0x00, 0x00 }
Array::Reverse( byteArray );
// { 0x00, 0x00, 0x00, 0x12 }
for the second example see: http://msdn2.microsoft.com/en-us/library/de8fssa4(VS.80).aspx
Hope this helps.
Just use the sprintf function. You will get a char*, so you have your array.
See the example on the webpage
Your code looks awful. That's it.
memcpy(pDVal, &x, 4);
What is pDVal? Why do you use 4? Is it sizeof(int)?
int iDSize = sizeof(double);
Why sizeof(double)? Maybe you need sizeof(int).
memcpy(&c, &(pDVal[i]), 1); copies the i-th byte of the pDVal array into c.
printf("%d|\n", c); does not print what you expect because %d expects an int, and a char with a negative value gets sign-extended when promoted.
Print like this:
printf("%d|\n", c & 0xff);
printf("%x|\n", c & 0xff);
If you are serious about C++, this is how I would suggest doing it:
#include <sstream>
template <typename Int>
std::string intToStr(Int const i) {
std::stringstream stream;
stream << std::hex << i;
return stream.str();
}
Which you may invoke as intToStr(1231212). If you insist on getting a char array (I strongly suggest you use std::string), you can copy the c_str() result over:
std::string const str = intToStr(1231212);
char* const chrs = new char[str.length()+1];
strcpy(chrs,str.c_str()); // needs <string.h>