I scan through the byte representation of an int variable and get a somewhat unexpected result.
If I do
int a = 127;
cout << (unsigned int) *((char *)&a);
I get 127 as expected. If I do
int a = 256;
cout << (unsigned int) *((char *)&a + 1);
I get 1 as expected. But if I do
int a = 128;
cout << (unsigned int) *((char *)&a);
I get 4294967168, which is, well… quite fancy.
The question is: is there a way to get 128 when looking at the first byte of an int variable whose value is 128?
For the same reason that (unsigned int)(char)128 is 4294967168: char is signed by default on most commonly used systems. 128 cannot fit in a signed 8-bit quantity, so when you cast it to char, you get -128 (0x80 in hex).
Then, when you cast -128 to an unsigned int, you get 2^32 - 128, which is 4294967168.
If you want to get +128, then use an unsigned char instead of char.
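For example, a minimal sketch of the same read done through unsigned char (this assumes the little-endian layout the question already relies on):
int a = 128;
cout << (unsigned int) *((unsigned char *)&a); // prints 128: the byte 0x80 is read as 128, no sign extension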
char is signed here. In your second example, a = 256 is 0x00000100 in memory, so the byte at offset 1 (on a little-endian machine) is 0x01. As a char that is simply 1, which converts to an unsigned int encoded as 0b00000000000000000000000000000001, i.e. 1.
In your third example, the first byte of a holds the bit pattern 0x80, which as a signed char is -128 (not +128). Converting -128 to an unsigned int sign-extends it to 0b11111111111111111111111110000000 (0xFFFFFF80), which is 4294967168.
As the comments have pointed out, it looks like what's happening here is that you are running into an oddity of two's complement. In your last cast, since you are not using an unsigned char, the highest-order bit of the byte is used to indicate positive or negative values. That leaves only 7 of the 8 bits to represent the magnitude, giving you a range of 0 to 127 for positive numbers (-128 to 127 overall).
If you exceed this range, it wraps, and you get -128, which when cast back to an unsigned int results in that abnormally large value.
int a = 128;
cout << (unsigned int) *((unsigned char *)&a);
Also, all of your code depends on running on a little-endian machine.
Here's how you should probably be doing these things:
int a = 127;
cout << (unsigned)(unsigned char)(0xFF & a);
int a = 256;
cout << (unsigned)(unsigned char)(0xFF & (a>>8));
int a = 128;
cout << (unsigned)(unsigned char)(0xFF & a);
Related
I am new to C++ programming. I am trying to write code that builds a single integer value from 6 or more individual bytes.
I have implemented the same for 4 bytes and it works.
My code for 4 bytes:
char *command = "\x42\xa0\x82\xa1\x21\x22";
__int64 value;
value = (__int64)(((unsigned char)command[2] << 24) + ((unsigned char)command[3] << 16) + ((unsigned char)command[4] << 8) + (unsigned char)command[5]);
printf("%x %x %x %x %x",command[2], command[3], command[4], command[5], value);
Using this code, the value of value is 82a12122, but when I try the same for 6 bytes the result is wrong.
Code for 6 Bytes:
char *command = "\x42\xa0\x82\xa1\x21\x22";
__int64 value;
value = (__int64)(((unsigned char)command[0] << 40) + ((unsigned char)command[1] << 32) + ((unsigned char)command[2] << 24) + ((unsigned char)command[3] << 16) + ((unsigned char)command[4] << 8) + (unsigned char)command[5]);
printf("%x %x %x %x %x %x %x", command[0], command[1], command[2], command[3], command[4], command[5], value);
The output value of value is 82a163c2, which is wrong; I need 42a082a12122.
So can anyone tell me how to get the expected output and what is wrong with the 6-byte code?
Thanks in advance.
Just cast each byte to a sufficiently large unsigned type before shifting. Even after the integral promotions (to int), the type is not large enough for shifts of 32 bits or more (in the usual case, which seems to apply to you).
See here for demonstration: https://godbolt.org/g/x855XH
unsigned long long large_ok(char x)
{
return ((unsigned long long)x) << 63;
}
unsigned long long large_incorrect(char x)
{
return ((unsigned long long)x) << 64;
}
unsigned long long still_ok(char x)
{
return ((unsigned char)x) << 31;
}
unsigned long long incorrect(char x)
{
return ((unsigned char)x) << 32;
}
In simpler terms:
The shift operators promote their operands to int/unsigned int automatically. This is why your four-byte version works: that promoted type is large enough for all of your shifts. However, in your implementation (as in most common ones) it holds only 32 bits, and the compiler will not automatically choose a 64-bit type when you shift by 32 bits or more (it has no way of knowing that you need one).
If you use large enough integral types for the shift operands, the shift will have the larger type as the result and the shifts will do what you expect.
If you turn on warnings, your compiler will probably also complain to you that you are shifting by more bits than the type has and thus always getting zero (see demonstration).
(The bit counts mentioned are of course implementation defined.)
A final note: Types beginning with double underscores (__) or underscore + capital letter are reserved for the implementation - using them is not technically "safe". Modern C++ provides you with types such as uint64_t that should have the stated number of bits - use those instead.
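As a rough sketch under those assumptions (fixed-width types from <cstdint>, every byte cast up before shifting), the 6-byte version could look like this:
#include <cstdint>
#include <cstdio>
int main()
{
    const unsigned char command[] = "\x42\xa0\x82\xa1\x21\x22";
    std::uint64_t value = ((std::uint64_t)command[0] << 40) | ((std::uint64_t)command[1] << 32) |
                          ((std::uint64_t)command[2] << 24) | ((std::uint64_t)command[3] << 16) |
                          ((std::uint64_t)command[4] << 8)  |  (std::uint64_t)command[5];
    std::printf("%llx\n", (unsigned long long)value); // prints 42a082a12122
}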
Your shifts overflow the 32-bit type the bytes are promoted to, and you are not printing the 64-bit value correctly.
This code is working:
(Take note of the print format and how the shifts are done in uint64_t)
#include <stdio.h>
#include <cstdint>
int main()
{
const unsigned char *command = (const unsigned char *)"\x42\xa0\x82\xa1\x21\x22";
uint64_t value=0;
for (int i=0; i<6; i++)
{
value <<= 8;
value += command[i];
}
printf("%x %x %x %x %x %x %llx",
command[0], command[1], command[2], command[3], command[4], command[5], value);
}
I need to write code which encrypts a char with a special key.
I must use my own key and a mask with the XOR operation.
But I have a problem understanding and implementing this.
Firstly, I created my own key which I use to encrypt a char. This key is, e.g., the number 123456789. In binary representation, 123456789 is: 00010101 11001101 01011011 00000111. I divided it into 4 x 8 bits because I work with values of type char.
I must also use a mask with it. My mask is 0xFF because it clears the high bits and leaves the lowest 8 bits for the XOR operation.
That means when I enter the char "a", it should be encrypted with this key and the mask using the XOR operation.
What's more, I wanted to check what my compiler shows for key[0]. I have int key[] = {00000111}; I think it should show the number 7 as a binary number, but the compiler shows the number 73. Why?
I would appreciate it if somebody could help me resolve this problem.
Here is my code:
#include <iostream>
using namespace std;
void encryption(char chars[], const int size);
int main() {
const int size1 = 4;
char chars1[size1];
unsigned int keys = 123456789;
int key[] = {00000111}; // why does it show number 73 instead of 7 ?
cout << "Enter a char to encrypt: " << endl;
cin >> chars1[0];
return 0; }
void encryption(char chars[], const int size) {
unsigned int keys = 123456789;
unsigned int key[] = {00010101, 11001101, 01011011, 00000111};
unsigned int mask = 0xFF;
int temp[4] = {0};
temp[0] = chars[0] ^ (keys & mask);
temp[1] = chars[0] ^ ((keys >> 8) & mask);
temp[2] = chars[0] ^ ((keys >> 16) & mask);
temp[3] = chars[0] ^ ((keys >> 24) & mask);
}
There are several problems:
unsigned int keys = 123456789 is in hex 075bcd15
00010101, 11001101, 01011011, 00000111 is in hex 15cd5b07
While both are 4 bytes, notice the reversed order of the bytes; that is because the computer uses what is known as little-endian byte order for integers. So if you get the bytes by casting to an unsigned char pointer, you will get the reversed byte order from what you expect.
unsigned int key[] is an array of unsigned int, each of which seems to be 32 bits (4 bytes).
For a single unsigned int use
unsigned int keyBytes3 = {0x075bcd15};
For an array of 4 unsigned char:
unsigned char keyBytes[] = {0x07, 0x5b, 0xcd, 0x15};
Personally, for instances where size matters, I prefer the uint32_t and uint8_t types; then the sizes are clear. int may be larger or smaller than 32 bits depending on the CPU.
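For instance, a minimal sketch with the fixed-width types (the hex values are the bytes of 123456789, most significant first):
#include <cstdint>
uint32_t keys = 0x075BCD15;                    // 123456789
uint8_t keyBytes[] = {0x07, 0x5B, 0xCD, 0x15}; // one byte per element, most significant first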
The problem is that the compiler does not interpret your 00000111 as binary. C++11 does not support binary literals (they were added in C++14). In addition, any integer literal that starts with a 0 digit (such as 00000111) is interpreted as octal. In octal, 111 is the equivalent of 73 in decimal.
You'll need to convert all of your "binary values" to decimal, then use those. Or you could use boost, which has a utility that handles what you're looking to do.
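If you can use C++14 or later, binary literals do exist; otherwise hex is the portable spelling. A small sketch of both, using the key bytes from the question:
unsigned int key14[] = {0b00010101, 0b11001101, 0b01011011, 0b00000111}; // C++14 binary literals
unsigned int key11[] = {0x15, 0xCD, 0x5B, 0x07};                         // C++11-compatible hex equivalent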
I have the following simple program that uses a union to convert between a 64 bit integer and its corresponding byte array:
union u
{
uint64_t ui;
char c[sizeof(uint64_t)];
};
int main(int argc, char *argv[])
{
u test;
test.ui = 0x0123456789abcdefLL;
for(unsigned int idx = 0; idx < sizeof(uint64_t); idx++)
{
cout << "test.c[" << idx << "] = 0x" << hex << +test.c[idx] << endl;
}
return 0;
}
What I would expect as output is:
test.c[0] = 0xef
test.c[1] = 0xcd
test.c[2] = 0xab
test.c[3] = 0x89
test.c[4] = 0x67
test.c[5] = 0x45
test.c[6] = 0x23
test.c[7] = 0x1
But what I actually get is:
test.c[0] = 0xffffffef
test.c[1] = 0xffffffcd
test.c[2] = 0xffffffab
test.c[3] = 0xffffff89
test.c[4] = 0x67
test.c[5] = 0x45
test.c[6] = 0x23
test.c[7] = 0x1
I'm seeing this on Ubuntu LTS 14.04 with GCC.
I've been trying to get my head around this for some time now. Why are the first 4 elements of the char array displayed as 32-bit integers, with 0xffffff prepended to them? And why only the first 4, not all of them?
Interestingly enough, when I use the array to write to a stream (which was the original purpose of the whole thing), the correct values are written. But comparing the array char by char obviously leads to problems, since the first 4 chars are not equal to 0xef, 0xcd, and so on.
Using char is not the right thing to do since it could be signed or unsigned. Use unsigned char.
union u
{
uint64_t ui;
unsigned char c[sizeof(uint64_t)];
};
char gets promoted to an int because of the prepended unary + operator. Since your chars are signed, any element with the highest bit set to 1 is interpreted as a negative number and promoted to an int with the same negative value. There are a few different ways to solve this:
Drop the +: ... << test.c[idx] << .... This will print the char as a character rather than a number, so it is probably not a good solution.
Declare c as unsigned char. The unary + will then promote it to a non-negative int.
Cast to unsigned char before applying the +: ... << +(unsigned char)test.c[idx] << ...
Mask off the upper bytes of the promoted integer with binary &: ... << (+test.c[idx] & 0xFF) << .... (The parentheses are needed because << binds tighter than &.) This displays only the lowest-order byte no matter how the char is promoted.
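As a minimal sketch, here is the question's loop with the last option applied (names taken from the question's code):
for (unsigned int idx = 0; idx < sizeof(uint64_t); idx++)
{
    // The parentheses matter: << binds tighter than &.
    cout << "test.c[" << idx << "] = 0x" << hex << (+test.c[idx] & 0xFF) << endl;
}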
Either use unsigned char or mask with test.c[idx] & 0xff to avoid the sign extension when a char value > 0x7f is converted to int.
It is an unsigned char vs. signed char issue, and how each is converted to an integer.
The unary plus causes the char to be promoted to an int (integral promotion). Because your chars are signed, the value is used as such, and the other bytes reflect that.
It is not true that only the first four are ints; they all are. You just don't see it in the representation, since leading zeroes are not shown.
Either use unsigned char or mask with & 0xff to get the desired result.
I need to convert an integer value into a char array at the bit level. Let's say int has 4 bytes and I need to split it into 4 chunks of 1 byte each as a char array.
Example:
int a = 22445;
// this is in binary 00000000 00000000 01010111 10101101
...
//and the result I expect
char b[4];
b[0] = 0; //first chunk
b[1] = 0; //second chunk
b[2] = 87; //third chunk - in binary 01010111
b[3] = 173; //fourth chunk - 10101101
I need this conversion to be really fast, if possible without any loops (perhaps some tricks with bit operations). The goal is thousands of such conversions per second.
I'm not sure if I recommend this, but you can #include <stdint.h> and <arpa/inet.h> and write:
*(uint32_t *)b = htonl((uint32_t)a);
(The htonl is to ensure that the integer is in big-endian order before you store it.)
int a = 22445;
char *b = (char *)&a;
// Note: these offsets assume big-endian byte order; on a little-endian
// machine the 87 is at *(b+1) and the 173 at *(b+0).
char b2 = *(b+2); // = 87
char b3 = *(b+3); // = 173
Depending on how you want negative numbers represented, you can simply convert to unsigned and then use masks and shifts:
unsigned char b[4];
unsigned ua = a;
b[0] = (ua >> 24) & 0xff;
b[1] = (ua >> 16) & 0xff;
b[2] = (ua >> 8) & 0xff;
b[3] = ua & 0xff;
(Due to the C rules for converting negative numbers to unsigned, this will produce the two's complement representation for negative numbers, which is almost certainly what you want).
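A small sketch of what that means for a negative input (assuming the 32-bit unsigned used above):
int a = -22445;                       // 22445 is 0x57AD
unsigned ua = a;                      // two's complement pattern 0xFFFFA853
unsigned char b0 = (ua >> 24) & 0xff; // 0xFF
unsigned char b3 = ua & 0xff;         // 0x53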
To access the binary representation of any type, you can cast a pointer to a char-pointer:
T x; // anything at all!
// In C++
unsigned char const * const p = reinterpret_cast<unsigned char const *>(&x);
/* In C */
unsigned char const * const p = (unsigned char const *)(&x);
// Example usage:
for (std::size_t i = 0; i != sizeof(T); ++i)
std::printf("Byte %u is 0x%02X.\n", p[i]);
That is, you can treat p as the pointer to the first element of an array unsigned char[sizeof(T)]. (In your case, T = int.)
I used unsigned char here so that you don't get any sign extension problems when printing the binary value (e.g. through printf in my example). If you want to write the data to a file, you'd use char instead.
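For example, a minimal sketch of dumping an object's bytes to a file this way (the filename is only illustrative):
#include <fstream>
int x = 22445; // any trivially copyable object works the same way
std::ofstream out("bytes.bin", std::ios::binary);
out.write(reinterpret_cast<char const *>(&x), sizeof x);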
You have already accepted an answer, but I will still give mine, which might suit you better (or the same...). This is what I tested with:
int a[3] = {22445, 13, 1208132};
for (int i = 0; i < 3; i++)
{
unsigned char * c = (unsigned char *)&a[i];
cout << (unsigned int)c[0] << endl;
cout << (unsigned int)c[1] << endl;
cout << (unsigned int)c[2] << endl;
cout << (unsigned int)c[3] << endl;
cout << "---" << endl;
}
...and it works for me. Now I know you requested a char array, but this is equivalent. You also requested that c[0] == 0, c[1] == 0, c[2] == 87, c[3] == 173 for the first case; here the order is reversed.
Basically, you use the SAME value, you only access it differently.
Why haven't I used htonl(), you might ask?
Well, since performance is an issue, I think you're better off not using it: it seems like a waste of (precious?) cycles to call a function that ensures the bytes will be in some order, when they may already be in that order on some systems, and when you could have modified your code to use a different order if they are not.
So instead, you could have checked the order before, and then used different loops (more code, but improved performance) based on what the result of the test was.
Also, if you don't know if your system uses a 2 or 4 byte int, you could check that before, and again use different loops based on the result.
Point is: you will have more code, but you will not waste cycles in a critical area, which is inside the loop.
If you still have performance issues, you could unroll the loop (duplicate code inside the loop, and reduce loop counts) as this will also save you a couple of cycles.
Note that using c[0], c[1], etc. is equivalent to *(c), *(c+1) as far as C++ is concerned.
typedef union {
    unsigned char intAsBytes[4]; // "byte" is not a standard type; unsigned char avoids sign issues
    int int32;
} U_INTtoBYTE;
I have a problem with bit shifting and unsigned longs. Here's my test code:
char header[4];
header[0] = 0x80;
header[1] = 0x00;
header[2] = 0x00;
header[3] = 0x00;
unsigned long l1 = 0x80000000UL;
unsigned long l2 = ((unsigned long) header[0] << 24) + ((unsigned long) header[1] << 16) + ((unsigned long) header[2] << 8) + (unsigned long) header[3];
cout << l1 << endl;
cout << l2 << endl;
I would expect l2 to also have a value of 2147483648 but instead it prints 18446744071562067968. I assume the bit shifting of the first byte causes problems?
Hopefully somebody can explain why this fails and how I can modify the calculation of l2 so that it returns the correct value.
Thanks in advance.
Your value of 0x80 stored in a char is a signed quantity. When you cast it to a wider type, it is sign-extended so that it keeps the same (negative) value in the larger type.
Change the type of char in the first line to unsigned char and the sign extension will not happen.
To simplify what is happening in your case, run this:
char c = 0x80;
unsigned long l = c;
cout << l << endl;
You get this output:
18446744073709551488
which is -128 converted to a 64-bit unsigned integer (0x80 is -128 as an 8-bit signed integer).
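A minimal sketch with that one change applied (everything else as in the question):
unsigned char header[4] = {0x80, 0x00, 0x00, 0x00};
unsigned long l2 = ((unsigned long) header[0] << 24) + ((unsigned long) header[1] << 16) + ((unsigned long) header[2] << 8) + (unsigned long) header[3];
// l2 is now 2147483648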
Same result here (Linux/x86-64, GCC 4.4.5). The behavior depends on the size of unsigned long, which is at least 32 bits, but may be larger.
If you want exactly 32 bits, use a uint32_t instead (from the header <stdint.h>; not in C++03 but in the upcoming standard and widely supported).
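For example, a small sketch of that variant (still reading the bytes through unsigned char so the sign extension discussed above cannot occur):
#include <stdint.h>
uint32_t l2 = ((uint32_t)(unsigned char)header[0] << 24) | ((uint32_t)(unsigned char)header[1] << 16) |
              ((uint32_t)(unsigned char)header[2] << 8)  |  (uint32_t)(unsigned char)header[3];
// exactly 32 bits wide on every platform; l2 == 2147483648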