I am required to select only two hex digits from a variable, i.e. 0X11223344, and I want my pointer to pick the 22 in the middle of the value. How do I go about it?
You can use shift and modulo operations to get the value
int main(){
    return (0X11223344 >> 16) % 256;
}
The program returns 34 == 0x22
A right shift of 4 removes one hex digit; a right shift of 16 removes four hex digits. A modulo of 16 keeps only the last hex digit; a modulo of 16*16 = 256 keeps the last two hex digits.
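Equivalently, you can mask with 0xFF instead of using modulo. A minimal sketch (the helper name get_byte is mine, not from the question):
#include <stdio.h>

// Extract byte i (0 = least significant) with a shift and a mask.
// Masking with 0xFF keeps the low 8 bits, which is the same as % 256.
unsigned get_byte(unsigned value, unsigned i) {
    return (value >> (8 * i)) & 0xFF;
}

int main() {
    printf("%#x\n", get_byte(0x11223344, 1)); // prints 0x33
    printf("%#x\n", get_byte(0x11223344, 2)); // prints 0x22
    return 0;
}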
You can also get the value with pointer operations:
int main() {
    int endianness = 2;
    int a = 0x11223344;
    char *b = ((char *) &a) + endianness;
    return *b;
}
The value of endianness is implementation-defined. On a little-endian system it's 2:
|01 02 03 04| memory address
-------------
|44 33 22 11| 4 byte int with address 01 and value 0x11223344
| | |22| | 1 byte char with address 03 and value 0x22
and on a big-endian system it's 1:
|01 02 03 04| memory address
-------------
|11 22 33 44| 4 byte int with address 01 and value 0x11223344
| |22| | | 1 byte char with address 02 and value 0x22
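If you would rather not hard-code the offset, here is a sketch that picks it at run time by inspecting the object representation (assuming a 4-byte int that is either little- or big-endian):
#include <stdio.h>
#include <string.h>

int main() {
    unsigned int a = 0x11223344;
    unsigned char bytes[sizeof a];
    memcpy(bytes, &a, sizeof a);        // copy the raw bytes of a

    // bytes[0] is 0x44 on a little-endian machine and 0x11 on a big-endian one
    int offset = (bytes[0] == 0x44) ? 2 : 1;
    printf("%#x\n", bytes[offset]);     // prints 0x22 on either layout
    return 0;
}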
Related
I have a framework which uses 16-bit floats, and I wanted to separate their components to then use for 32-bit floats. In my first approach I used bit shifts and similar operations, and while that worked, it was wildly chaotic to read.
I then wanted to use a struct with custom-sized bit fields instead, and use a union to write to that struct.
The code to reproduce the issue:
#include <iostream>
#include <cstdio>   // for printf
#include <stdint.h>

union float16_and_int16
{
    struct
    {
        uint16_t Mantissa : 10;
        uint16_t Exponent : 5;
        uint16_t Sign : 1;
    } Components;

    uint16_t bitMask;
};

int main()
{
    uint16_t input = 0x153F;

    float16_and_int16 result;
    result.bitMask = input;

    printf("Mantissa: %#010x\n", result.Components.Mantissa);
    printf("Exponent: %#010x\n", result.Components.Exponent);
    printf("Sign: %#010x\n", result.Components.Sign);

    return 0;
}
In the example I would expect my Mantissa to be 0x00000054, the exponent to be 0x0000001F, and sign 0x00000001
Instead I get Mantissa: 0x0000013f, Exponent: 0x00000005, Sign: 0x00000000
Which means that from my bit mask the Sign was taken first (the first bit), the next 5 bits went to the Exponent, and then 10 bits to the Mantissa, so the order is the inverse of what I wanted. Why is that happening?
The worst part is that a different compiler could give the expected order. The standard has never specified the implementation details of bitfields, and in particular their order. The rationale, as usual, is that this is an implementation detail and that programmers should not rely nor depend on it.
The downside is that it is not possible to use bitfields in cross-language programs, and that programmers cannot use bitfields for processing data with well-known bit layouts (for example in network protocol headers), because it is too complex to be sure how the implementation will process them.
For that reason I have always thought that it was just an unusable feature, and I only use bit masks on unsigned types instead of bitfields. But that last part is no more than my own opinion...
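To illustrate that mask-and-shift approach, here is a minimal sketch for the 1/5/10 layout discussed in the answer below (sign in bit 15, exponent in bits 14:10, mantissa in bits 9:0); the variable names are mine:
#include <stdint.h>
#include <stdio.h>

int main() {
    uint16_t input = 0x153F;
    uint16_t mantissa = input & 0x3FF;          // low 10 bits
    uint16_t exponent = (input >> 10) & 0x1F;   // next 5 bits
    uint16_t sign     = (input >> 15) & 0x1;    // top bit
    printf("Mantissa: %#x Exponent: %#x Sign: %#x\n", mantissa, exponent, sign);
    // prints Mantissa: 0x13f Exponent: 0x5 Sign: 0
    return 0;
}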
I would say your input is incorrect, for this compiler anyway. This is what the float16_and_int16 order looks like.
sign exponent mantissa
[15] [14:10] [9:0]
or
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
if input = 0x153F then bitMask ==
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
0 0 0 1 0 1 0 1 0 0 1 1 1 1 1 1
so
MANTISSA == 0100111111 (0x13F)
EXPONENT == 00101 (0x5)
SIGN == 0 (0x0)
If you want mantissa to be 0x54, exponent 0x1f and sign 0x1 you need
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
1 1 1 1 1 1 0 0 0 1 0 1 0 1 0 0
or
input = 0xFC54
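To double-check that value, a small sketch reusing the question's union; it assumes the same LSB-first bit-field allocation the asker's compiler used, so it is not portable:
#include <stdint.h>
#include <stdio.h>

// Same union as in the question; field order depends on the compiler.
union float16_and_int16 {
    struct {
        uint16_t Mantissa : 10;
        uint16_t Exponent : 5;
        uint16_t Sign : 1;
    } Components;
    uint16_t bitMask;
};

int main() {
    float16_and_int16 result;
    result.bitMask = 0xFC54;
    printf("Mantissa: %#x Exponent: %#x Sign: %#x\n",
           result.Components.Mantissa, result.Components.Exponent, result.Components.Sign);
    // expected with that compiler's layout: Mantissa: 0x54 Exponent: 0x1f Sign: 0x1
    return 0;
}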
I have two separate bitfields that make up an "Identity" field; they are 11 + 18 bits in length (29 bits total).
In the bitfield they are of the expected size:
header a;
memset(a.arr, 0, sizeof(a.arr));
a = {0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0}; // 1010 0000
cout << hex << a.BID << endl; // 010 0000 1010 -> 20a
cout << hex << a.IDEX << endl; // 00 1010 0000 1010 0000 -> a0a0
and what I need to do is combine these fields into a 29-bit segment, e.g. 010 0000 1010 00 1010 0000 1010 0000.
When attempting to concatenate the two bitfields however the result is not what I expect:
int BID = a.BID;
int IDEX = a.IDEX;
int result = (BID<<11) | IDEX;
cout << BID << endl;
printf("%x %d",result, result); // -> 10f0a0 (21 bits) where I expect 828A0A0 (29 bits)
It's important for me to have all 29 bits, as within this 29-bit field there are various subfields, and I was going to take this output and put it through another bitfield to resolve those subfields.
Would you be able to assist in how I could combine BID and IDEX mentioned above into one combined field of 29 bits? Unfortunately there are two other bits in the header, in between the BID and IDEX fields, that are ignored, which is why I cannot just set my bitfield to 29 bits.
You should shift by 18 bits first and then do the OR. For example:
int result = (BID<<18) | IDEX;
Otherwise you are overwriting part of the first field: shifting BID by only 11 bits and then ORing with an 18-bit value lets the upper bits of IDEX overlap, and corrupt, the low bits of BID.
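For instance, a minimal sketch with made-up sample values (the header struct is not shown in the question), masking each field to its declared width before combining so stray upper bits cannot leak into the result:
#include <stdio.h>

int main() {
    // Sample values standing in for a.BID (11 bits) and a.IDEX (18 bits)
    unsigned int BID  = 0x20A;    // 010 0000 1010
    unsigned int IDEX = 0xA0A0;   // 00 1010 0000 1010 0000

    // Mask each field to its width, then place BID above the 18 IDEX bits
    unsigned int result = ((BID & 0x7FF) << 18) | (IDEX & 0x3FFFF);

    printf("%x\n", result);       // prints 828a0a0, the expected 29-bit value
    return 0;
}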
This was given as a past question in an exam, but I'm unable to understand the result of the last 4 printf calls. I get the conversion to hexadecimal for the first two, but I don't really see how there are values at
ptr[0] to ptr[3]
This is the section of code that was compiled and run.
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){
    typedef unsigned char byte;

    unsigned int nines = 999;
    byte * ptr = (byte *) &nines;

    printf("%x\n", nines);
    printf("%x\n", nines * 0x10);
    printf("%d\n", ptr[0]);
    printf("%d\n", ptr[1]);
    printf("%d\n", ptr[2]);
    printf("%d\n", ptr[3]);

    return EXIT_SUCCESS;
}
and this was the corresponding output
3e7
3e70
231
3
0
0
When you do byte * ptr = (byte *) &nines; you set ptr to the address of nines, which holds the value 999, or 0x3e7 in hex.
From the problem, I am assuming that an int has 4 bytes and that this is a little-endian system, i.e. the bytes are stored like this:
---------------------------------
| 0xe7 | 0x03 | 0x00 | 0x00 |
---------------------------------
ptr ptr+1 ptr+2 ptr+3
So when you print them out, you get the values of 231, 3, 0 and 0 (231 is equal to 0xe7)
In the little-endian system, used by Intel processors and most microcontrollers today, the least significant byte is stored first and the most significant byte is stored last.
On the other hand, we have the big-endian system, used by some older Motorola controllers and PowerPCs. There the most significant byte is stored first. The output on those systems would be 0, 0, 3 and 231.
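If you want to see which of the two your own machine uses, a small sketch along these lines works (assuming a 4-byte int and one of these two byte orders):
#include <stdio.h>

int main() {
    unsigned int nines = 999;                          // 0x000003e7
    unsigned char *ptr = (unsigned char *) &nines;

    // On a little-endian machine the low byte 0xe7 comes first in memory
    if (ptr[0] == 0xe7)
        printf("little endian: %d %d %d %d\n", ptr[0], ptr[1], ptr[2], ptr[3]);
    else
        printf("big endian: %d %d %d %d\n", ptr[0], ptr[1], ptr[2], ptr[3]);
    return 0;
}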
This code is platform-dependent.
Given that your platform is:
Little Endian
CHAR_BIT == 8
sizeof(int) == 4
The binary representation of 999 in memory is 11100111 00000011 00000000 00000000.
Hence, printed byte by byte as decimal numbers, 999 appears in memory as 231 3 0 0.
As a side-note, you should bring it to the attention of your instructor at school/college/university, that since this code is platform-dependent, it is a very bad example to be given as part of an exam.
If you have an exam like this, I suggest you change lecturer as soon as possible.
The representation of an unsigned int is implementation-defined: its size and endianness depend on your machine.
Casting from an unsigned int * to a character pointer and reading the bytes directly is one of the few aliasing cases the standard allows; what you read is the object representation, which is itself implementation-defined.
On a little-endian machine like x86, your unsigned int of 999 is represented as:
| 0xE7 | 0x03 | 0x00 | 0x00 |
-----------------------------
ptr ptr+1 ptr+2 ptr+3
where the numbers between the | marks are the byte values. Hence, it will be printed as:
231 3 0 0
On another machine, let's say a 32-bit big-endian one (e.g. Atmel AVR32), it will be represented as:
| 0x00 | 0x00 | 0x03 | 0xE7 |
-----------------------------
ptr ptr+1 ptr+2 ptr+3
then it will print:
0 0 3 231
On another machine, let's say a 32-bit middle-endian one, it might be represented as:
| 0x03 | 0xE7 | 0x00 | 0x00 |
-----------------------------
ptr ptr+1 ptr+2 ptr+3
then it will print:
3 231 0 0
On an older machine, let's say a 16-bit little-endian machine, it is represented as:
| 0xE7 | 0x03 | xx| xx |
------------------------
ptr ptr+1 ptr+2 ptr+3
where xx is an unspecified value; reading ptr[2] and ptr[3] goes past the end of the 2-byte int, which is undefined behavior.
On a 64-bit big-endian machine where int is 8 bytes wide, it would be represented as:
| 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x03 | 0xE7 |
---------------------------------------------------------
  ptr    ptr+1  ptr+2  ptr+3
it will print:
0 0 0 0
That said, there is no single exact answer to the exam question: the output depends on the implementation, and on some of them (such as the 16-bit example above) the code even invokes undefined behavior.
Further reading: endianness, undefined behavior.
This code displays the values of each individual byte of the (assumed to be 32-bit) number nines.
nines's value is 999 in decimal, 3E7 in hexadecimal, and according to the values printed, it's stored in little-endian byte order (the "least significant" byte comes first).
It's easier to see if you convert the values to hexadecimal as well:
printf ("%x\n",ptr[0]);
printf ("%x\n",ptr[1]);
printf ("%x\n",ptr[2]);
printf ("%x\n",ptr[3]);
Which will display this:
E7
3
0
0
Also, you could interpret the decimal values this way:
231 + 3*256 + 0*65536 + 0*16777216 = 999
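The same arithmetic can be checked in code; this sketch assumes the little-endian, 4-byte layout described above:
#include <stdio.h>

int main() {
    unsigned int nines = 999;
    unsigned char *ptr = (unsigned char *) &nines;

    // Rebuild the value from its bytes: each byte is worth 256 times the previous one
    unsigned int rebuilt = ptr[0] + ptr[1] * 256u + ptr[2] * 65536u + ptr[3] * 16777216u;
    printf("%u\n", rebuilt);   // prints 999 on a little-endian machine
    return 0;
}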
nines is an unsigned 32-bit integer on the stack (note that it is possible for int to be 64 bits wide, but that does not seem to be the case here).
ptr is a pointer, which gets initialized to the address of nines. Since it is a pointer, you can use array syntax to access the value at the address pointed to. We assume it is a little endian machine, so ptr[0] is the first (least significant) byte of nines, ptr[1] is the next, etc.
231 is then the value of the least significant byte, in hex it is 0xe7
I have two functions that print a 32-bit number in binary.
The first one divides the number into bytes and starts printing from the last byte (from the 25th bit of the whole integer).
The second one is more straightforward and starts from the 1st bit of the number.
It seems to me that these functions should have different outputs, because they process the bits in different orders, yet the outputs are the same. Why?
#include <stdio.h>

void printBits(size_t const size, void const * const ptr)
{
    unsigned char *b = (unsigned char*) ptr;
    unsigned char byte;
    int i, j;

    for (i = size - 1; i >= 0; i--)
    {
        for (j = 7; j >= 0; j--)
        {
            byte = (b[i] >> j) & 1;
            printf("%u", byte);
        }
    }
    puts("");
}

void printBits_2(unsigned *A) {
    for (int i = 31; i >= 0; i--)
    {
        printf("%u", (A[0] >> i) & 1u);
    }
    puts("");
}

int main()
{
    unsigned a = 1014750;
    printBits(sizeof(a), &a); // ->00000000000011110111101111011110
    printBits_2(&a);          // ->00000000000011110111101111011110
    return 0;
}
Both your functions print the binary representation of the number from the most significant bit to the least significant bit. Today's PCs (and the majority of other computer architectures) use the so-called little-endian format, in which multi-byte values are stored with the least significant byte first.
That means that 32-bit value 0x01020304 stored on address 0x1000 will look like this in the memory:
+--------++--------+--------+--------+--------+
|Address || 0x1000 | 0x1001 | 0x1002 | 0x1003 |
+--------++--------+--------+--------+--------+
|Data || 0x04 | 0x03 | 0x02 | 0x01 |
+--------++--------+--------+--------+--------+
Therefore, on little-endian architectures, printing a value's bits from MSB to LSB is equivalent to taking its bytes in reverse order and printing each byte's bits from MSB to LSB.
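A short sketch to observe that layout on your own machine (the output comment assumes a little-endian system):
#include <stdio.h>

int main() {
    unsigned int value = 0x01020304;
    unsigned char *bytes = (unsigned char *) &value;

    // Print each byte in the order it appears in memory
    for (unsigned int i = 0; i < sizeof value; i++)
        printf("byte %u: 0x%02x\n", i, bytes[i]);
    // little-endian output: 0x04, 0x03, 0x02, 0x01
    return 0;
}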
This is the expected result when:
1) You use both functions to print a single integer, in binary.
2) Your C++ implementation is on a little-endian hardware platform.
Change either one of these factors (with printBits_2 appropriately adjusted), and the results will be different.
They don't process the bits in different orders. Here's a visual:
Bytes: 4 3 2 1
Bits: 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1
Bits: 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1
The fact that the output is the same from both of these functions tells you that your platform uses Little-Endian encoding, which means the most significant byte comes last.
The first two rows show how the first function works on your program, and the last row shows how the second function works.
However, the first function will fail on platforms that use big-endian encoding and will output the bits in the order shown in the third row here:
Bytes: 4 3 2 1
Bits: 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1
Bits: 8 7 6 5 4 3 2 1 16 15 14 13 12 11 10 9 24 23 22 21 20 19 18 17 32 31 30 29 28 27 26 25
For the printBits function, the pointer to the uint32 value is passed as a void * and cast to a char pointer:
unsigned char *b = (unsigned char*) ptr;
The outer loop then walks from b[size-1] down to b[0]. On a little-endian processor, b[size-1] is the most significant byte of the uint32 value and b[size-2] is the next most significant, so this method prints the uint32 value MSB first. (On a big-endian processor it would start with the least significant byte instead.)
As for printBits_2, you are using
unsigned *A
i.e. a pointer to unsigned int. Its loop runs from bit 31 down to bit 0 and prints the uint32 value in binary, MSB first, independent of the byte order in memory.
Hello, I have been trying to develop a C++ function or algorithm that behaves like bit shifting. The function should always return 4 bytes (00 00 00 00) for any input number ranging from 0 to 99999999
input (int) -> expected output (char or string)
0 -> 00 00 00 00
20 -> 00 00 20 00
200 -> 00 02 00 00
2000 -> 00 20 00 00
99999999-> 99 99 99 99
and it can be reversed to return the original numbers.
input (char or string)-> expected output (int/double)
00 00 20 00 -> 20
00 02 00 00 -> 200
00 20 00 00 -> 2000
99 99 99 99 -> 99999999
EDIT:
This is the code I have. It comes close to what I am looking for, but it is still a work in progress:
#include <cstdio>
#include <string>

void Convert_to_Decimal(std::string str)
{
    /// insert '.' before the last two digits (after the 6th character)
    str.insert(6, 1, '.');
    /// remove leading zeros
    str.erase(0, str.find_first_not_of('0'));

    double ret = std::stod(str);
    printf("%g\n", ret);
}

int main()
{
    Convert_to_Decimal("00020000"); // prints 200
    return 0;
}
I would appreciate any pointers or a solution to this; thank you in advance.
Here is a simple solution:
#include <stdio.h>
#include <stdint.h>

/* encode a number given as a string into a 4 byte buffer */
void number_convert(unsigned char *dest, const char *str) {
    uint32_t v = 0;
    while (*str >= '0' && *str <= '9') {
        /* parse digits and encode as BCD */
        v = (v << 4) + (*str++ - '0');
    }
    /* make room for 2 decimal places */
    v <<= 8;
    if (*str == '.') {
        if (str[1] >= '0' && str[1] <= '9') {
            /* set number of tenths */
            v += (str[1] - '0') << 4;
            if (str[2] >= '0' && str[2] <= '9') {
                /* set number of hundredths */
                v += (str[2] - '0');
            }
        }
    }
    /* store the BCD value in big endian order */
    dest[0] = (v >> 24) & 255;
    dest[1] = (v >> 16) & 255;
    dest[2] = (v >> 8) & 255;
    dest[3] = (v >> 0) & 255;
}

void test(const char *str) {
    unsigned char buf[4];
    number_convert(buf, str);
    printf("%s -> %02X %02X %02X %02X\n", str, buf[0], buf[1], buf[2], buf[3]);
}

int main(void) {
    test("0");
    test("20");
    test("200");
    test("2000");
    test("123.1");
    test("999999.99");
    return 0;
}
EDIT
Your code uses a floating-point variable. Your question is unclear: do you want to compute 4 bytes? To do this, you should use a byte array; otherwise, please expand with a more precise explanation of what you are trying to achieve.
To perform the conversion from your 4 byte digit array back to a number, you can do this:
double convert_BCD_to_double(unsigned char *str) {
    long res = 0;
    for (int i = 0; i < 4; i++) {
        res = res * 100 + (str[i] >> 4) * 10 + (str[i] & 15);
    }
    return (double)res / 100;
}
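As a usage example (assuming convert_BCD_to_double from above is in scope), feeding it the encodings discussed earlier gives back the original numbers:
#include <stdio.h>

// Usage sketch: relies on convert_BCD_to_double() defined above.
int main(void) {
    unsigned char twenty[4]  = { 0x00, 0x00, 0x20, 0x00 };
    unsigned char largest[4] = { 0x99, 0x99, 0x99, 0x99 };
    printf("%.2f\n", convert_BCD_to_double(twenty));   // prints 20.00
    printf("%.2f\n", convert_BCD_to_double(largest));  // prints 999999.99
    return 0;
}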
For integers, let us define shifting as multiplying or dividing the number by its representation base.
For decimal, a shift right:
300 --> 30
hexadecimal:
0x345 --> 0x34
binary:
1101 --> 110
For decimal, shifting right one digit requires dividing by 10. For hexadecimal, divide by 16 and for binary, divide by 2.
Shifting left is multiplying by the base: decimal - multiply by 10, hexadecimal by 16 and binary by 2.
When the shift goes beyond the edges of the number, you cannot restore the original number by shifting the other direction.
For example, shifting 345 right one digit yields 34. There is no way to get the 5 back by shifting left one digit. The common rule is when a number is shifted, a new digit of 0 is introduced. Thus 34 shifted left one digit yields 340.
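A tiny sketch of that rule for decimal shifts, mirroring the 345 example (divide to shift right, multiply to shift left):
#include <stdio.h>

int main() {
    unsigned int n = 345;
    unsigned int right = n / 10;       // decimal shift right: 345 -> 34 (the 5 is lost)
    unsigned int left  = right * 10;   // decimal shift left: 34 -> 340, a 0 is introduced
    printf("%u %u\n", right, left);    // prints 34 340
    return 0;
}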
Regarding your floating point: I don't see how the bytes 99 99 99 99 produce 999999.99. Is the last byte always to the right of the decimal point?
For shifting bytes, use the operators << and >>. You want to use the largest size integer that contains your byte quantity, such as uint32_t for 4 byte values. Also, use unsigned numbers because you don't want the signed representation to interfere with the shifting.
Edit 1: Example functions
// Alternative 1: shift left one bit at a time by multiplying by 2
uint32_t Shift_Left(unsigned int value, unsigned int quantity)
{
    while (quantity > 0)
    {
        value = value * 2;   // multiply by 2 == shift left by one bit
        quantity--;          // count down, otherwise the loop never ends
    }
    return value;
}

// Alternative 2 (same behavior, keep one or the other): the built-in shift operator
uint32_t Shift_Left(unsigned int value, unsigned int quantity)
{
    return value << quantity;
}
For shifting by bytes, set quantity to 8 or multiples of 8 (8 bits per byte).
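As a usage example (assuming one of the Shift_Left definitions above is in scope):
#include <stdio.h>
#include <stdint.h>

// Usage sketch: relies on one of the Shift_Left definitions above.
int main() {
    unsigned int byte_shifted = Shift_Left(0x20, 8);    // shift by one byte
    unsigned int word_shifted = Shift_Left(0x20, 16);   // shift by two bytes
    printf("%08X\n", byte_shifted);                     // prints 00002000
    printf("%08X\n", word_shifted);                     // prints 00200000
    return 0;
}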