For example, 130ABF (hexadecimal) is equal to 1247935 (decimal),
So my byte array is
char buf[3] = {0x13 , 0x0A , 0xBF};
and I need to retrieve the decimal value from the byte array.
Below is my sample code:
#include <iostream>
using namespace std;

int main()
{
    char buf[3] = {0x13, 0x0A, 0xBF};
    int number = buf[0]*0x10000 + buf[1]*0x100 + buf[2];
    cout << number << endl;
    return 0;
}
and the result is wrong:
1247679
Unless I change the
char buf[3] = {0x13 , 0x0A , 0xBF};
to
int buf[3] = {0x13 , 0x0A , 0xBF};
then it gives the correct result.
Unfortunately, I must keep my array as char type. Does anyone know how to solve this?
Define the array as:
unsigned char buf[3];
Remember that char could be signed.
UPDATE: To complete the answer, it is worth adding that char is a distinct type whose signedness is implementation-defined: it may behave like "signed char" or like "unsigned char", and the standard does not determine which.
Array elements will be promoted to int before the arithmetic is evaluated. So if your compiler treats char as signed, you get the following (assuming a 32-bit int):
int number = 19*0x10000 + 10*0x100 + (-65);
To avoid this effect you can declare your array as unsigned char arr[], or use masking plus shifts:
int number = ((buf[0] << 16) & 0xff0000)
           | ((buf[1] <<  8) & 0x00ff00)
           | ((buf[2] <<  0) & 0x0000ff);
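A minimal self-contained sketch (assuming an int of at least 32 bits) showing that both fixes yield the expected 1247935:

#include <iostream>

int main()
{
    // Fix 1: unsigned char, so no sign extension can occur.
    unsigned char ubuf[3] = {0x13, 0x0A, 0xBF};
    int n1 = ubuf[0] * 0x10000 + ubuf[1] * 0x100 + ubuf[2];

    // Fix 2: keep char, but mask off the sign-extended high bits.
    char buf[3] = {0x13, 0x0A, static_cast<char>(0xBF)};
    int n2 = ((buf[0] << 16) & 0xff0000)
           | ((buf[1] << 8) & 0x00ff00)
           | (buf[2] & 0x0000ff);

    std::cout << n1 << " " << n2 << std::endl; // prints: 1247935 1247935
    return 0;
}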
Since your char array is signed, when you initialize the last element with 0xBF you are trying to assign 191 to it, while the maximum it can store is 127: a narrowing conversion occurs. A workaround would be the following:
unsigned char buf[3] = { 0x13, 0x0A, 0xBF };
This will prevent the narrowing conversion. Your compiler should have given you a warning about it.
Related
I have an 8-byte string of flags; some of them are booleans and some are chars. What I want is to access those flags by name in my code, like myStruct.value1.
I created a struct according to my wishes. I would expect I can copy the string into that struct, as both have a size of 64 bits in total.
// destination
typedef struct myStruct_t {
    uint8_t  value1  : 8;
    uint8_t  value2  : 8;
    uint16_t value3  : 16;
    uint8_t  value4  : 8;
    uint8_t  value5  : 1;
    uint8_t  value6  : 1;
    uint8_t  value7  : 1;
    uint8_t  value8  : 1;
    uint8_t  value9  : 1;
    uint16_t value10 : 11;
    uint8_t  value11 : 8;
} myStruct_t;
// source
char buf[8] = "12345678";
// I read about strcpy and memcpy but it doesn't work
memcpy(myStruct, buf, 8);
However, it does not work and I get the following error message:
error: cannot convert 'myStruct_t' to 'void*' for argument '1' to 'void* memcpy(void*, const void*, size_t)'
memcpy(myStruct, buf, 8);
^
memcpy expects its first two arguments to be pointers.
Arrays like your buf will implicitly decay to pointers, but your type myStruct_t will not.
myStruct_t myStruct;
memcpy(&myStruct, buf, 8);
// ^ produces a POINTER to myStruct
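Putting it together, a minimal sketch of the corrected call. The struct is copied from the question; note that bit-field packing is implementation-defined, so even though the fields sum to 64 bits, the compiler may insert padding, which the size check below guards against:

#include <cstdint>
#include <cstring>
#include <iostream>

typedef struct myStruct_t {  // layout from the question
    uint8_t  value1  : 8;
    uint8_t  value2  : 8;
    uint16_t value3  : 16;
    uint8_t  value4  : 8;
    uint8_t  value5  : 1;
    uint8_t  value6  : 1;
    uint8_t  value7  : 1;
    uint8_t  value8  : 1;
    uint8_t  value9  : 1;
    uint16_t value10 : 11;
    uint8_t  value11 : 8;
} myStruct_t;

int main()
{
    // Bit-field layout varies between compilers, so check the real size.
    std::cout << sizeof(myStruct_t) << std::endl;

    char buf[9] = "12345678";            // size 9 leaves room for the terminator
    myStruct_t myStruct;
    if (sizeof(myStruct_t) >= 8)
        std::memcpy(&myStruct, buf, 8);  // &myStruct gives memcpy its pointer
    return 0;
}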
If I understand correctly what you are trying to do, I would first convert the 8 character buffer to binary. Then, you can extract substrings from it for the length of each of the values you want. Finally, you can convert the binary strings to their numerical values.
Also, you should make your char array size 9. You need an extra character for the null terminator. The way you have it currently won't compile.
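For illustration, a minimal sketch of that string-based approach; the field offsets here are just examples, since where each value lands depends on the layout you intend:

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    char buf[9] = "12345678";  // size 9: room for the null terminator

    // Convert the 8 characters into one 64-character binary string.
    std::string bits;
    for (int i = 0; i < 8; ++i)
        bits += std::bitset<8>(buf[i]).to_string();

    // Extract substrings matching the field widths you want...
    std::string value1 = bits.substr(0, 8);    // an 8-bit field
    std::string value3 = bits.substr(16, 16);  // a 16-bit field

    // ...and convert the binary strings back to numbers.
    std::cout << std::stoi(value1, nullptr, 2) << "\n";
    std::cout << std::stoi(value3, nullptr, 2) << "\n";
    return 0;
}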
I'm currently working on a program in Eclipse that converts to and from base64. However, I've just noticed that char values seem to have 7 bits instead of the usual 8. For example, the character 'o' is shown to be represented in binary as 1101111 instead of 01101111, which effectively prevents me from completing my project, as I need a total of 24 bits for the conversion to work. Is there any way to either append a 0 to the beginning of the value (I tried bit-shifting in both directions, but neither worked), or to prevent the issue altogether?
The code for the (incomplete/nonfunctional) offending method is as follows; let me know if more is required:
std::string Encoder::encode(char* src, unsigned char* dest)
{
    char ch0 = src[0];
    char ch1 = src[1];
    char ch2 = src[2];
    char sixBit1 = ch0 >> 1;
    dest[0] = ch2;
    dest[1] = ch1;
    dest[2] = ch0;
    dest[3] = '-';
}
char in C/C++ is commonly a signed 8-bit type (strictly, its signedness is implementation-defined). When it is signed, only 7 bits are available for the magnitude, because one bit is used for the sign.
Try to use unsigned char instead.
Either unsigned char or uint8_t from <stdint.h> should work. For maximum portability, uint_least8_t is guaranteed to exist.
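Either way, once the bytes are unsigned, the 24-bit regrouping is straightforward. Here is a minimal sketch (not the asker's encoder; split24 is a hypothetical helper) of the core base64 step, splitting three bytes into four 6-bit values:

#include <cstdio>

// Split 3 bytes (24 bits) into four 6-bit values.
// With unsigned char the high bit is data, never a sign bit.
void split24(const unsigned char* src, unsigned char* dest)
{
    dest[0] =  src[0] >> 2;                            // top 6 bits of byte 0
    dest[1] = ((src[0] & 0x03) << 4) | (src[1] >> 4);  // 2 low + 4 high bits
    dest[2] = ((src[1] & 0x0F) << 2) | (src[2] >> 6);  // 4 low + 2 high bits
    dest[3] =   src[2] & 0x3F;                         // low 6 bits of byte 2
}

int main()
{
    const unsigned char in[3] = {'o', 'k', '!'};
    unsigned char out[4];
    split24(in, out);
    for (int i = 0; i < 4; ++i)
        std::printf("%d\n", out[i]);  // each value lies in [0, 63]
    return 0;
}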
I've got two unsigned char arrays and a char array. I want to XOR-fill both unsigned char arrays and then join each with the char array.
char mensaje[] = "A";
unsigned char key[] = "61181d3cfd91b0cc0890c2c0646c94f692b311ffbf93749c0aadd8ae6f04f044";
test(key, mensaje);
void test(unsigned char key[], char mensaje[]) {
    unsigned char pad_exterior[64];
    unsigned char pad_interior[64];
    for (int i = 0; i < 64; i++) {
        pad_exterior[i] = 0x5c ^ key[i];
        pad_interior[i] = 0x36 ^ key[i];
    }

    char* result = new char[strlen(mensaje) + 64];
    copy(mensaje, mensaje + strlen(mensaje), result);
    copy(pad_interior, pad_interior + 64, result + strlen(mensaje));

    char* result2 = new char[strlen(mensaje) + 64];
    copy(mensaje, mensaje + strlen(mensaje), result2);
    copy(pad_exterior, pad_exterior + 64, result2 + strlen(mensaje));
}
The problem is, at the end, strlen(pad_exterior) is 65 but strlen(pad_interior) is 1.
However, if I replace
pad_interior[i]= 0x36 ^ key[i];
with
pad_interior[i]= 0x36;
it does work.
Why this odd behavior?
Is there a better way to accomplish what I am trying to do? I've tried over a dozen ways to copy the arrays.
Edit:
I figured I needed char arrays because I am calling EVP_DigestUpdate.
The test function basically just XORs and joins the arrays.
The call is at the beginning of the code
strlen is designed to work with null-terminated char arrays; it uses the \0 terminator to find the length.
So if 0x36 ^ key[i] is 0 for some early i, your pad_interior becomes null-terminated too early.
And if 0x5c ^ key[i] is never zero, your pad_exterior is not null-terminated at all, so you run into UB when you do strlen(pad_exterior). You were just a little lucky that it returned 65.
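Since the pads are raw bytes rather than C strings, the simplest fix is to carry lengths explicitly instead of calling strlen; EVP_DigestUpdate takes an explicit length argument anyway, so the buffers never need a terminator. A minimal sketch, where join is a hypothetical helper reusing the question's names:

#include <algorithm>
#include <cstring>

// Build message || pad without relying on a null terminator.
// The pad length is passed explicitly because a pad may contain 0 bytes.
void join(const char* mensaje, const unsigned char* pad, std::size_t padLen,
          unsigned char* result, std::size_t& resultLen)
{
    std::size_t msgLen = std::strlen(mensaje);  // mensaje IS null-terminated
    std::copy(mensaje, mensaje + msgLen, result);
    std::copy(pad, pad + padLen, result + msgLen);
    resultLen = msgLen + padLen;                // remember the real length
}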
I am reading in binary data from a file:
char* buffIn = new char[8];
ifstream inFile(path, ifstream::binary);
inFile.read(buffIn, 8);
I then want to convert the char* read in (as binary) to an unsigned long, but I am having problems. I am not quite sure what is going on, but, for instance, 0x00000000000ACD gets interpreted as 0xFFFFFFFFFFFFCD. I suspect all the 0x00 bytes are causing some sort of problem when converting from char* to unsigned long...
unsigned long number = *(buffIn);
How do I do this properly?
Since buffIn is a char pointer, *(buffIn) just grabs one character. You have to reinterpret the memory address as an unsigned long pointer and then dereference it.
unsigned long number = *((unsigned long*)buffIn);
In addition to recasting the char[8] (which will only read the first unsigned long, which is 32 bits wide on many platforms), you can also use some simple bit-wise operations. Cast each byte through unsigned char first so a negative char does not sign-extend:
unsigned long value = ((unsigned long)(unsigned char)buffin[0] << 24)
                    | ((unsigned long)(unsigned char)buffin[1] << 16)
                    | ((unsigned long)(unsigned char)buffin[2] <<  8)
                    |  (unsigned long)(unsigned char)buffin[3];
Try something like
unsigned long* buffInL = new unsigned long[2];
char* buffIn=(char*)buffInL;
ifstream inFile(path, ifstream::binary);
inFile.read(buffIn, 8);
Unlike other types, char* is allowed to alias.
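If you would rather avoid casts and aliasing questions entirely, a portable alternative is to memcpy the bytes into the integer. A minimal sketch (the path is hypothetical; the byte order is still the host's, so swap explicitly if the file format defines one):

#include <cstring>
#include <fstream>

int main()
{
    std::ifstream inFile("data.bin", std::ifstream::binary); // hypothetical path
    char buffIn[8];
    inFile.read(buffIn, 8);

    unsigned long number = 0;
    std::memcpy(&number, buffIn, sizeof number); // well-defined byte copy
    return 0;
}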
I ran into a very strange problem. I think I am missing some very basic thing here. When I do this:
char buffer[1] = {0xA0};
int value = 0;
value = (int)buffer[0];
printf("Array : %d\n", value);
I get the result -96, which shouldn't happen. It should give me 160, as the hex number 0xA0 is 160 in decimal. When I put small values like 0x1F in the buffer, it works fine.
Can anyone tell me what am I missing here?
char is signed on your platform, so its range is -128 to 127.
Declare buffer as unsigned char or cast to unsigned char:
char buffer[1] = {0xA0};
int value = 0;
value = (unsigned char)buffer[0];
printf("Array : %d\n", value);