Hexadecimal representation of double fractional part for SHA-256 - C++

I am trying to write a SHA-256 hash function for practice. The Wikipedia article states that the initial hash values are given by the fractional parts of the square roots of the first 8 primes 2..19. Now I am trying to calculate them. What I have done so far:
#include <vector>
#include <cstdint>
#include <cmath>
#include <cstdio>

// fill primes with all prime values between min and max value
int getPrimes(uint32_t min, uint32_t max, std::vector<uint32_t>* primes)
{
    if (min < 2) min = 2; // 2 is the smallest prime
    if (min > max) return 0; // max has to be larger than min
    for (uint32_t value = min; value <= max; value++)
    {
        uint32_t tmp;
        for (tmp = 2; tmp <= sqrt(value); tmp++) // start checking with 2, because 1 always divides
        {
            if (value % tmp == 0)
            {
                break;
            }
        }
        if (tmp > sqrt(value)) primes->push_back(value); // if no other integer divisor is found, add number to vector
    }
    return 0;
}
int main()
{
    std::vector<uint32_t> primes;
    getPrimes(2, 20, &primes); // fills vector with all prime values between 2 and 20
    double tmp = sqrt(primes[0]); // get square root, returns double
    printf("value %f\n", tmp); // debug
    printf("size of double %zu\n", sizeof(double)); // get representation byte size
    double * tmpOffset = &tmp; // get value offset
    unsigned char * tmpChar = (unsigned char*)tmpOffset; // convert to char pointer
    printf("address of variable %p\n", (void*)&tmp); // debug
    printf("raw values\n1:%X\n2:%X\n3:%X\n4:%X\n5:%X\n6:%X\n7:%X\n8:%X\n",
        (uint8_t)tmpChar[0], (uint8_t)tmpChar[1], (uint8_t)tmpChar[2], (uint8_t)tmpChar[3],
        (uint8_t)tmpChar[4], (uint8_t)tmpChar[5], (uint8_t)tmpChar[6], (uint8_t)tmpChar[7]);
    return 0;
}
This returns the first 8 primes, calculates the square root of 2, and reads the raw byte values directly from the memory location where it is stored:
value 1.414214
size of double 8
address of variable 6881016
raw values
1:CD
2:3B
3:7F
4:66
5:9E
6:A0
7:F6
8:3F
Compared to the value given in the Wikipedia article, 0x6a09e667, what I am doing here looks awfully wrong. Is there some remapping happening, or how exactly is a double represented in binary? Can someone point me in the right direction on how to correctly calculate the fractional part in hex?
Edit:
Thanks for your help! It is not pretty but does work for now.
printf("raw fractional part:\n0x%02X %02X %02X %02X %02X %02X %02X\n",
(uint8_t)(0xf & tmpChar[6]), (uint8_t)tmpChar[5], (uint8_t)tmpChar[4], (uint8_t)tmpChar[3],
(uint8_t)tmpChar[2], (uint8_t)tmpChar[1], (uint8_t)tmpChar[0]);
uint32_t fracPart = (0xf & tmpChar[6]);
fracPart <<= 8;
fracPart |= tmpChar[5];
fracPart <<= 8;
fracPart |= tmpChar[4] ;
fracPart <<= 8;
fracPart |= tmpChar[3];
fracPart <<= 4;
fracPart |= (0xf0 & tmpChar[2]) >> 4;
printf("fractional part: %X\n", fracPart);
Edit2
A little bit of a nicer implementation:
uint32_t fracPart2 = *(uint32_t*)((char*)&tmp + 3); // read 4 bytes starting at byte 3: fractional part minus 4 bits
fracPart2 <<= 4; // shift into the correct position
fracPart2 |= (0xf0 & *((char*)&tmp + 2)) >> 4; // append the last 4 bits
printf("beautiful fractional part: %X\n", fracPart2);
This solution is highly platform dependent (unaligned access, strict aliasing, endianness), and in a second approach I am going for something like in the link of comment 2.
Edit3
So this is my final solution, which does not depend on the internal representation of a double and calculates the fraction just using math.
uint32_t getFractionalPart(double value)
{
    uint32_t retValue = 0;
    for (uint8_t i = 0; i < 8; i++)
    {
        value = value - floor(value); // keep only the fractional part
        retValue <<= 4; // make room for the next nibble
        value *= 16;
        retValue += (uint32_t)floor(value); // append the next hex digit
    }
    return retValue;
}

One thing to keep in mind is that the double here is 64 bits.
If you look at the IEEE representation of doubles at the link below, it has 1 sign bit, 11 exponent bits, and the remaining 52 bits are the fraction (significand) bits.
Now when you look at the output you've gotten, take a look at the nibbles I've put in quotes. Do they look familiar?
The reason the number appears backwards is endianness: on a little-endian machine the least significant byte is stored first. The fraction field starts right after the first 12 bits (1 sign + 11 exponent), and from there it moves backwards through the bytes.
1:CD
2:3B
3:'7'F
4:'66'
5:'9E'
6:'A0'
7:F'6'
8:3F
or
8:3F
7:F'6'
6:'A0'
5:'9E'
4:'66'
3:'7'F
2:3B
1:CD
https://en.wikipedia.org/wiki/Double-precision_floating-point_format


Next higher number with same number of set bits and some bits fixed

I'm trying to find a way to, given a number of bits that need to be set, and a few indexes of bits that should remain fixed, generate the next higher number with the same number of set bits, that has all the fixed bits in place. This is closely related to https://www.chessprogramming.org/Traversing_Subsets_of_a_Set#Snoobing_the_Universe
The difference is that I want to keep some of the bits unchanged, and I'm trying to do this as efficiently as possible / something close to the snoob function, that given a number and the conditions, bithacks its way into the next one (So I'm trying to avoid iterating through all the smaller subsets and seeing which ones contain the required bits, for example).
For example, if I have the universe of numbers {1,2,...,20}, I'd like to, given a number with bits {2,5,6} set, generate the smallest number with 6 set bits that has bits {2,5,6} set, and then the number after that, and so on.
Solution 1 (based on "Snoobing the Universe")
One solution is to define a value corresponding to the "fixed bits" and one corresponding to the "variable bits".
E.g. for bits {2, 5, 6}, the "fixed bits" value would be 0x64 (assuming bits are counted starting from 0).
The "variable bits" is initialized with the smallest value having the remaining number of bits. E.g. if we want a total of 6 bits and have 3 fixed bits, the remaining number of bits is 6-3=3, so the "variable bits" starting value is 0x7.
Now the resulting value is calculated by "blending" the two bit sets by inserting the "variable bits" into the places where the "fixed bits" are 0 (see the function blend() below).
To get the next value, the "variable bits" are modified using the linked "Snoobing the Universe" function (snoob() below) and the result is again obtained by "blending" the fixed and variable bits.
All in all, a solution is as follows (prints the first 10 numbers as an example):
#include <stdio.h>
#include <stdint.h>

uint64_t snoob(uint64_t x) {
    uint64_t smallest, ripple, ones;
    smallest = x & -x;
    ripple = x + smallest;
    ones = x ^ ripple;
    ones = (ones >> 2) / smallest;
    return ripple | ones;
}

uint64_t blend(uint64_t fixed, uint64_t var)
{
    uint64_t result = fixed;
    uint64_t maskResult = 1;
    while (var != 0)
    {
        if ((result & maskResult) == 0)
        {
            if ((var & 1) != 0)
            {
                result |= maskResult;
            }
            var >>= 1;
        }
        maskResult <<= 1;
    }
    return result;
}

int main(void)
{
    const uint64_t fixedBits = 0x64; // Bits 2, 5, 6 must be set
    const int additionalBits = 3;
    uint64_t varBits = ((uint64_t)1 << additionalBits) - 1;
    uint64_t value;
    for (unsigned i = 0; i < 10; i++)
    {
        value = blend(fixedBits, varBits);
        printf("%u: decimal=%llu hex=0x%04llx\n", i, value, value);
        varBits = snoob(varBits); // Get next value for variable bits
    }
}
Solution 2 (based on "Snoobing any Sets")
Another solution, based on the linked "Snoobing any Sets" (function snoobSubset() below), is to define the "variable set" as the bits which are not fixed and then initialize the "variable bits" as the n least significant of these bits (see function getLsbOnes() below). In the example case, n=3.
This solution is as follows:
#include <stdio.h>
#include <stdint.h>

uint64_t getLsbOnes(uint64_t value, unsigned count)
{
    uint64_t mask = 1;
    while (mask != 0)
    {
        if (count > 0)
        {
            if ((mask & value) != 0)
            {
                count--;
            }
        }
        else
        {
            value &= ~mask;
        }
        mask <<= 1;
    }
    return value;
}

// get next greater subset of set with same number of one bits
uint64_t snoobSubset(uint64_t sub, uint64_t set) {
    uint64_t tmp = sub - 1;
    uint64_t rip = set & (tmp + (sub & (0 - sub)) - set);
    for (sub = (tmp & sub) ^ rip; sub &= sub - 1; rip ^= tmp, set ^= tmp)
        tmp = set & (0 - set);
    return rip;
}

int main(void)
{
    const uint64_t fixedBits = 0x64;
    const int additionalBits = 3;
    const uint64_t varSet = ~fixedBits;
    uint64_t varBits = getLsbOnes(varSet, additionalBits);
    uint64_t value;
    for (unsigned i = 0; i < 10; i++)
    {
        value = fixedBits | varBits;
        printf("%u: decimal=%llu hex=0x%04llx\n", i, value, value);
        varBits = snoobSubset(varBits, varSet);
    }
}
Example output
The output for both solutions should be:
0: decimal=111 hex=0x006f
1: decimal=119 hex=0x0077
2: decimal=125 hex=0x007d
3: decimal=126 hex=0x007e
4: decimal=231 hex=0x00e7
5: decimal=237 hex=0x00ed
6: decimal=238 hex=0x00ee
7: decimal=245 hex=0x00f5
8: decimal=246 hex=0x00f6
9: decimal=252 hex=0x00fc

Adding positive and negative numbers in IEEE-754 format

My problem seems to be pretty simple: I wrote a program that manually adds floating point numbers together. This program has certain restrictions. (such as no iostream or use of any unary operators), so that is the reason for the lack of those things. As for the problem, the program seems to function correctly when adding two positive floats (1.5 + 1.5 = 3.0, for example), but when adding two negative numbers (10.0 + -5.0) I get very wacky numbers. Here is the code:
#include <cstdio>

#define BIAS32 127

struct Real
{
    // sign bit
    int sign;
    // UNBIASED exponent
    long exponent;
    // Fraction including implied 1. at bit index 23
    unsigned long fraction;
};

Real Decode(int float_value);
int Encode(Real real_value);
Real Normalize(Real value);
Real Add(Real left, Real right);
unsigned long Add(unsigned long leftop, unsigned long rightop);
unsigned long Multiply(unsigned long leftop, unsigned long rightop);
void alignExponents(Real* left, Real* right);
bool is_neg(Real real);
int Twos(int op);

int main(int argc, char* argv[])
{
    int left, right;
    char op;
    int value;
    Real rLeft, rRight, result;
    if (argc < 4) {
        printf("Usage: %s <left> <op> <right>\n", argv[0]);
        return -1;
    }
    sscanf(argv[1], "%f", (float*)&left);
    sscanf(argv[2], "%c", &op);
    sscanf(argv[3], "%f", (float*)&right);
    rLeft = Decode(left);
    rRight = Decode(right);
    if (op == '+') {
        result = Add(rLeft, rRight);
    }
    else {
        printf("Unknown operator '%c'\n", op);
        return -2;
    }
    value = Encode(result);
    printf("%.3f %c %.3f = %.3f (0x%08x)\n",
        *((float*)&left),
        op,
        *((float*)&right),
        *((float*)&value),
        value
    );
    return 0;
}

Real Decode(int float_value)
{   // Test sign bit of float_value - test exponent bits of float_value & apply bias - test mantissa bits of float_value
    Real result{ float_value >> 31 & 1 ? 1 : 0, ((long)Add(float_value >> 23 & 0xFF, -BIAS32)), (unsigned long)float_value & 0x7FFFFF };
    return result;
}

int Encode(Real real_value)
{
    int x = 0;
    x |= real_value.fraction; // Set the fraction bits of x
    x |= real_value.sign << 31; // Set the sign bits of x
    x |= Add(real_value.exponent, BIAS32) << 23; // Set the exponent bits of x
    return x;
}

Real Normalize(Real value)
{
    if (is_neg(value))
    {
        value.fraction = Twos(value.fraction);
    }
    unsigned int i = 0;
    while (i < 9)
    {
        if ((value.fraction >> Add(23, i)) & 1) // If there are set bits past the mantissa section
        {
            value.fraction >>= 1; // shift mantissa right by 1
            value.exponent = Add(value.exponent, 1); // increment exponent to accommodate for shift
        }
        i = Add(i, 1);
    }
    return value;
}

Real Add(Real left, Real right)
{
    Real a = left, b = right;
    alignExponents(&a, &b); // Aligns exponents of both operands
    unsigned long sum = Add(a.fraction, b.fraction);
    Real result = Normalize({ a.sign, a.exponent, sum }); // Normalize result if need be
    return result;
}

unsigned long Add(unsigned long leftop, unsigned long rightop)
{
    unsigned long sum = 0, test = 1; // sum initialized to 0, test created to compare bits
    while (test) // while test is not 0
    {
        if (leftop & test) // if the digit being tested is 1
        {
            if (sum & test) sum ^= test << 1; // if the sum tests to 1, carry a bit over
            sum ^= test;
        }
        if (rightop & test)
        {
            if (sum & test) sum ^= test << 1;
            sum ^= test;
        }
        test <<= 1;
    }
    return sum;
}

void alignExponents(Real* a, Real* b)
{
    if (a->exponent != b->exponent) // If the exponents are not equal
    {
        if (a->exponent > b->exponent)
        {
            int disp = a->exponent - b->exponent; // number of shifts needed based on difference between the two exponents
            b->fraction |= 1 << 23; // sets the implicit bit for shifting
            b->exponent = a->exponent; // sets exponents equal to each other
            b->fraction >>= disp; // mantissa is shifted over to accommodate for the increase in power
            return;
        }
        int disp = b->exponent - a->exponent;
        a->fraction |= 1 << 23;
        a->exponent = b->exponent;
        a->fraction >>= disp;
        return;
    }
    return;
}

bool is_neg(Real real)
{
    if (real.sign) return true;
    return false;
}

int Twos(int op)
{
    return Add(~op, -1); // NOT the operand and add 1 to it
}
On top of that, I just tested the values 10.5 + 5.5 and got a 24.0, so there appears to be even more wrong with this than I initially thought. I've been working on this for days and would love some help/advice.
Here is some help/advice. Now that you have worked on some of the code, I suggest going back and reworking your data structure. The declaration of such a crucial data structure would benefit from a lot more comments, making sure you know exactly what each field means.
For example, the implicit bit is not always 1. It is zero if the exponent is zero. That should be dealt with in your Encode and Decode functions. For the rest of your code, it is just a significand bit and should not have any special handling.
When you start thinking about rounding, you will find you often need more than 23 bits in an intermediate result.
Making the significand of negative numbers 2's complement will create a problem of having the same information stored two ways. You will have both a sign bit, as though doing sign-and-magnitude, and have the sign encoded in the signed integer significand. Keeping them consistent will be a mess. Whatever you decide about how Real will store negative numbers, document it and keep it consistent throughout.
If I were implementing this I would start by defining Real very, very carefully. I would then decide what operations I wanted to be able to do on Real, and write functions to do them. If you get those right each function will be relatively simple.

Bitwise operator to calculate checksum

I am trying to come up with a C/C++ function to calculate the checksum of a given array of hex values.
char *hex = "3133455D332015550F23315D";
For example, the above buffer has 12 bytes, and the last byte is the checksum.
Now what needs to be done is: convert the first 11 individual bytes to decimal and then take their sum.
i.e., 31 = 49,
33 = 51,.....
So 49 + 51 + .....................
Then convert this decimal value to hex, take the LSB of that hex value, and convert that to binary.
Now take the 2's complement of this binary value and convert that to hex. At this step, the hex value should be equal to the 12th byte.
But the above buffer is just an example and so it may not be correct.
So there are multiple steps involved in this.
I am looking for an easy way to do this using bitwise operators.
I did something like this, but it seems to duplicate the byte into the upper 8 bits and doesn't give me the right answer.
int checksum(char * buffer, int size) {
    int value = 0;
    unsigned short tempChecksum = 0;
    int checkSum = 0;
    for (int index = 0; index < size - 1; index++) {
        value = (buffer[index] << 8) | (buffer[index]);
        tempChecksum += (unsigned short) (value & 0xFFFF);
    }
    checkSum = (~(tempChecksum & 0xFFFF) + 1) & 0xFFFF;
}
I couldn't get this logic to work. I don't have enough embedded programming behind me to understand the bitwise operators. Any help is welcome.
ANSWER
I got this working with the changes below.
for (int index = 0; index < size - 1; index++) {
    value = buffer[index];
    tempChecksum += (unsigned short) (value & 0xFFFF);
}
checkSum = (~(tempChecksum & 0xFF) + 1) & 0xFF;
Using addition to obtain a checksum is at least unusual; common checksums use bitwise XOR or a full CRC. But assuming it is really what you need, it can be done easily with unsigned char operations:
#include <stdio.h>

char checksum(const char *hex, int n) {
    unsigned char ck = 0;
    for (int i = 0; i < n; i += 1) {
        unsigned val;
        int cr = sscanf(hex + 2 * i, "%2x", &val); // convert 2 hex chars to a byte value
        if (cr == 1) ck += val;
    }
    return ck;
}

int main() {
    char hex[] = "3133455D332015550F23315D";
    char ck = checksum(hex, 11);
    printf("%2x", (unsigned) (unsigned char) ck);
    return 0;
}
As the operations are made on an unsigned char, everything exceeding a byte value is properly discarded, and you obtain your value (0x26 in your example).

Remove nth bit from buffer, and shift the rest

Given a uint8_t buffer of length x, I am trying to come up with a function or a macro that can remove the nth bit (or bits n to n+i) and then left-shift the remaining bits.
example #1:
for input 0b76543210 0b76543210 ... the output should be 0b76543217 0b654321 ...
example #2: if the input is:
uint8_t input[8] = {
    0b00110011,
    0b00110011,
    ...
};
the output without the first bit, should be
uint8_t output[8] = {
    0b00110010,
    0b01100100,
    ...
};
I have tried the following to remove the first bit, but it did not work for the second group of bits.
/* A macro to extract (a-b) range of bits without shifting */
#define BIT_RANGE(N,x,y) ((N) & ((0xff >> (7 - (y) + (x))) << ((x))))

void removeBit0(uint8_t *n) {
    for (int i = 0; i < 7; i++) {
        n[i] = (BIT_RANGE(n[i], i + 1, 7)) << (i + 1) |
               (BIT_RANGE(n[i + 1], 1, i + 1)) << (7 - i); /* This does not extract the next element bits */
    }
    n[7] = 0;
}
Update #1
In my case, the input will be a uint64_t number, and then I will use memmove to shift it one place to the left.
Update #2
The solution can be in C/C++, assembly (x86-64), or inline assembly.
This is really 2 subproblems: remove bits from each byte and pack the results. This is the flow of the code below. I wouldn't use a macro for this. Too much going on. Just inline the function if you're worried about performance at that level.
#include <stdio.h>
#include <stdint.h>

// Remove bits n to n+k-1 from x.
unsigned scrunch_1(unsigned x, int n, int k) {
    unsigned hi_bits = ~0u << n;
    return (x & ~hi_bits) | ((x >> k) & hi_bits);
}

// Remove bits n to n+k-1 from each byte in the buffer,
// then pack left. Return number of packed bytes.
size_t scrunch(uint8_t *buf, size_t size, int n, int k) {
    size_t i_src = 0, i_dst = 0;
    unsigned src_bits = 0; // Scrunched source bit buffer.
    int n_src_bits = 0;    // Initially it's empty.
    for (;;) {
        // Get scrunched bits until the buffer has at least 8.
        while (n_src_bits < 8) {
            if (i_src >= size) { // Done when source bytes exhausted.
                // If there are left-over bits, add one more byte to output.
                if (n_src_bits > 0) buf[i_dst++] = src_bits << (8 - n_src_bits);
                return i_dst;
            }
            // Pack 'em in.
            src_bits = (src_bits << (8 - k)) | scrunch_1(buf[i_src++], n, k);
            n_src_bits += 8 - k;
        }
        // Write the highest 8 bits of the buffer to the destination byte.
        n_src_bits -= 8;
        buf[i_dst++] = src_bits >> n_src_bits;
    }
}

int main(void) {
    uint8_t x[] = { 0xaa, 0xaa, 0xaa, 0xaa };
    size_t n = scrunch(x, 4, 2, 3);
    for (size_t i = 0; i < n; i++) {
        printf("%x ", x[i]);
    }
    printf("\n");
    return 0;
}
This writes b5 ad 60, which by my reckoning is correct. A few other test cases work as well.
Oops, I coded it the first time shifting the wrong way, but I include that version here in case it's useful to someone.
#include <stdio.h>
#include <stdint.h>

// Remove bits n to n+k-1 from x.
unsigned scrunch_1(unsigned x, int n, int k) {
    unsigned hi_bits = 0xffu << n;
    return (x & ~hi_bits) | ((x >> k) & hi_bits);
}

// Remove bits n to n+k-1 from each byte in the buffer,
// then pack right. Return number of packed bytes.
size_t scrunch(uint8_t *buf, size_t size, int n, int k) {
    size_t i_src = 0, i_dst = 0;
    unsigned src_bits = 0; // Scrunched source bit buffer.
    int n_src_bits = 0;    // Initially it's empty.
    for (;;) {
        // Get scrunched bits until the buffer has at least 8.
        while (n_src_bits < 8) {
            if (i_src >= size) { // Done when source bytes exhausted.
                // If there are left-over bits, add one more byte to output.
                if (n_src_bits > 0) buf[i_dst++] = src_bits;
                return i_dst;
            }
            // Pack 'em in.
            src_bits |= scrunch_1(buf[i_src++], n, k) << n_src_bits;
            n_src_bits += 8 - k;
        }
        // Write the lower 8 bits of the buffer to the destination byte.
        buf[i_dst++] = src_bits;
        src_bits >>= 8;
        n_src_bits -= 8;
    }
}

int main(void) {
    uint8_t x[] = { 0xaa, 0xaa, 0xaa, 0xaa };
    size_t n = scrunch(x, 4, 2, 3);
    for (size_t i = 0; i < n; i++) {
        printf("%x ", x[i]);
    }
    printf("\n");
    return 0;
}
This writes d6 5a b. A few other test cases work as well.
Something similar to this should work:
template<typename S> void removeBit(S* buffer, size_t length, size_t index)
{
    const size_t BITS_PER_UNIT = sizeof(S) * 8;
    // first we find which data unit contains the desired bit
    const size_t unit = index / BITS_PER_UNIT;
    // and which index has the bit inside the specified unit, counting from the most significant bit
    const size_t relativeIndex = (BITS_PER_UNIT - 1) - index % BITS_PER_UNIT;
    // then we unset that bit; the cast makes the shift happen in the unit's width
    buffer[unit] &= ~((S)1 << relativeIndex);
    // now we have to shift what's on the right by 1 position
    // we create a mask such that if 0b00100000 is the bit removed we use 0b00011111 as mask to shift the rest
    const S partialShiftMask = ((S)1 << relativeIndex) - 1;
    // now we keep all bits left of the removed one and shift left all the others
    buffer[unit] = (buffer[unit] & ~partialShiftMask) | ((buffer[unit] & partialShiftMask) << 1);
    for (size_t i = unit + 1; i < length; ++i)
    {
        // we set the rightmost bit of the previous unit according to the first bit of the current unit
        buffer[i-1] |= buffer[i] >> (BITS_PER_UNIT - 1);
        // then we shift the current unit by one
        buffer[i] <<= 1;
    }
}
I just tested it on some basic cases, so maybe something is not exactly correct, but this should move you onto the right track.

Binary-Decimal Negative bit set

How can I tell if a binary number is negative?
Currently I have the code below. It works fine converting to binary. When converting to decimal, I need to know if the leftmost bit is 1 to tell whether it is negative, but I cannot seem to figure out how to do that.
Also, instead of making my Bin2 function print 1's and 0's, how can I make it return an integer? I didn't want to store it in a string and then convert to int.
EDIT: I'm using 8-bit numbers.
#include <iostream>

int Bin2(int value, int Padding = 8)
{
    for (int I = Padding; I > 0; --I)
    {
        if (value & (1 << (I - 1)))
            std::cout << '1';
        else
            std::cout << '0';
    }
    return 0;
}

int Dec2(int Value)
{
    //bool Negative = (Value & 10000000);
    int Dec = 0;
    for (int I = 0; Value > 0; ++I)
    {
        if (Value % 10 == 1)
        {
            Dec += (1 << I);
        }
        Value /= 10;
    }
    //if (Negative) (Dec -= (1 << 8));
    return Dec;
}

int main()
{
    Bin2(25);
    std::cout << "\n\n";
    std::cout << Dec2(11001);
}
You are checking for negative value incorrectly. Do the following instead:
bool Negative = (value & 0x80000000); // works only where int is 32 bits
Or maybe just compare it with 0.
bool Negative = (value < 0);
Why don't you just compare it to 0? That should work fine, and you almost certainly can't do this more efficiently than the compiler.
I am entirely unclear if this is what the OP is looking for, but it's worth a toss:
If you know you have a value in a signed int that is supposed to be representing a signed 8-bit value, you can pull it apart, store it in a signed 8-bit value, then promote it back to a native int signed value like this:
#include <stdio.h>

int main(void)
{
    // signed integer, value is 245. 8-bit signed value is (-11)
    int num = 0xF5;
    // pull out the low 8 bits, storing them in a signed char.
    signed char ch = (signed char)(num & 0xFF);
    // now let the signed char promote to a signed int.
    int res = ch;
    // finally print both.
    printf("%d ==> %d\n", num, res);
    // do it again for an 8-bit positive value,
    // this time with just direct casts.
    num = 0x70;
    printf("%d ==> %d\n", num, (int)((signed char)(num & 0xFF)));
    return 0;
}
Output
245 ==> -11
112 ==> 112
Is that what you're trying to do? In short, the code above will take the 8 bits sitting at the bottom of num, treat them as a signed 8-bit value, then promote them to a signed native int. The result is you can now "know" not only whether the 8 bits were a negative number (since res will be negative if they were), you also get the 8-bit signed number as a native int in the process.
On the other hand, if all you care about is whether the 8th bit is set in the input int, and it is supposed to denote a negative value state, then why not just:
int IsEightBitNegative(int val)
{
    return (val & 0x80) != 0;
}