This code takes a 2-byte int and exchanges its bytes. Why does the line I commented in this code seem not to work?
INPUT/OUTPUT
*When I input 4, the expected output is 1024; instead, the 9th to 16th bits are all set to "1" after passing that line.
*Then I tried the input 65280, whose expected output is 255; instead it outputs 65535 (sets all 16 bits to "1").
#include <stdio.h>
int main(void)
{
    short unsigned int num;
    printf("Enter the number: ");
    fscanf(stdin, "%hu", &num);
    printf("\nNumber with no swap between bytes---> %hu\n", num);
    unsigned char swapa, swapb;
    swapa = ~num;
    num >>= 8;
    swapb = ~num;
    num = ~swapa;
    num <<= 8;
    num = ~swapb; //this line is not working, why?
    printf("Swaped bytes value----> %hu\n", num);
}
Two problems. Because of integral promotions, ~swapa is computed at int width, so the complement sets bits 8-15 as well before the result is assigned back. Also, the current value of num is getting clobbered by the commented line, since num = ~swapb plainly overwrites the byte you just shifted into place; you probably want a |=, +=, or ^= there.
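Here is a minimal sketch of the corrected program, assuming a 16-bit unsigned short: cast each complemented byte back down to 8 bits (otherwise the integral promotion of ~swapa sets bits 8-15), and OR the low byte in instead of assigning over num:

#include <stdio.h>
int main(void)
{
    unsigned short num;
    printf("Enter the number: ");
    if (fscanf(stdin, "%hu", &num) != 1)
        return 1;

    unsigned char swapa = ~num;      /* complemented low byte */
    num >>= 8;
    unsigned char swapb = ~num;      /* complemented high byte */

    num = (unsigned char)~swapa;     /* recover low byte; the cast clears the promoted upper bits */
    num <<= 8;                       /* move it into the high position */
    num |= (unsigned char)~swapb;    /* OR the old high byte into the low position */

    printf("Swapped bytes value----> %hu\n", num);
    return 0;
}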
I am trying to count the number of bits in a 64-bit integer, but it shows some unexpected output. Here is the code.
Counting the bits was not the major part; the unexpected output is the problem. Please have a look at the input and output!
#include <iostream>
#include <math.h>
#include <stdint.h>
#include <cstdio>
using namespace std;
int64_t t, n, ans;
int main(){
    cin >> t;
    while(t--){
        int64_t ans = 0;
        cin >> n;
        /*
        while(n > 0LL){
            n >>= 1LL;
            ans++;
        }//*/
        ans = floor(log2(n));
        //ans=floor(log2l(n));
        cout << ans << "\n";
    }
    return 0;
}
The input and output are as follows:
10
18446744073709551615
63 (the number of bits in 18446744073709551615) should be printed only once, and the console should wait until I input another number so it can count the bits in that one. But that is not happening. The output comes out like this:
63
63
63
63
63
63
63
63
63
63
Please help me regarding this.
In addition to the explanations above of how to properly count bits, here is why the loop runs until t is exhausted without asking for more input:
The problem is the size of your value: 18446744073709551615
It is bigger than a signed long long can hold. So, once you try to extract it, the stream is no longer good(): failbit is set:
failbit - Logical error on i/o operation
From here http://www.cplusplus.com/reference/istream/istream/operator%3E%3E/ :
Extracts and parses characters ... interpret them as ... the proper type...
...Then (if good), ...adjusting the stream's internal state flags accordingly.
So, in your case:
you read the first value, 18446744073709551615, and failbit is set;
you process whatever is in n (it should be numeric_limits<long long>::max()), ignoring any input errors;
you try reading the next value;
since failbit is set, cin >> n does nothing, leaving the old n as is;
this repeats until t is exhausted.
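A sketch of one way around this: read into an unsigned 64-bit type (18446744073709551615 fits in uint64_t but not int64_t), and stop on a failed extraction instead of silently reusing the old n. The counting uses the classic clear-the-lowest-set-bit loop:

#include <cstdint>
#include <iostream>

int main() {
    std::int64_t t;
    std::cin >> t;
    while (t--) {
        std::uint64_t n;
        if (!(std::cin >> n)) {      // failbit set: report and stop instead of spinning
            std::cerr << "bad input\n";
            break;
        }
        int ans = 0;
        while (n > 0) {              // Kernighan's loop: n &= n - 1 clears
            n &= n - 1;              // the lowest set bit, so the loop runs
            ++ans;                   // once per set bit
        }
        std::cout << ans << "\n";    // prints 64 for 18446744073709551615
    }
    return 0;
}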
On x86-64 there's the POPCNT instruction. https://en.wikipedia.org/wiki/SSE4#POPCNT_and_LZCNT
https://msdn.microsoft.com/en-us/library/bb385231.aspx
unsigned __int64 __popcnt64(
unsigned __int64 value
);
GCC
__builtin_popcountll((unsigned long long) x);
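For portable C++ without compiler-specific intrinsics, std::bitset::count() does the same job; a small sketch (compilers typically lower it to POPCNT when the target supports it):

#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t x = 18446744073709551615ULL;       // all 64 bits set
    std::cout << std::bitset<64>(x).count() << "\n"; // prints 64
    return 0;
}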
I came from this question, where I wanted to write two integers into a single byte, each guaranteed to be between 0 and 15 (4 bits each).
Now if I close the file, and run a different program that reads....
for (int i = 0; i < 2; ++i)
{
char byteToRead;
file.seekg(i, std::ios::beg);
file.read(&byteToRead, sizeof(char));
bool correct = file.bad();
unsigned int num1 = (byteToRead >> 4);
unsigned int num2 = (byteToRead & 0x0F);
}
The issue is, sometimes this works, but other times the first number comes out negative and the second number is something like 10 or 9 all the time, and they are most certainly not the numbers I wrote!
So here, for example, the first two numbers work but the next number does not. The output of the read above would be:
At byte 0, num1 = 5 and num2 = 6
At byte 1, num1 = 4294967289 and num2 = 12
At byte 1, num1 should be 9. It seems the 12 writes fine, but the 9 << 4 isn't working. The byteToWrite on my end is -100 'œ'.
I checked out this question, which I think has a similar problem, but I feel like my endianness is right here.
With a signed type, the right-shift operator preserves the value of the left-most bit: if the left-most bit is 0 before the shift, it will still be 0 after the shift; if it is 1, it will still be 1. This preserves the value's sign.
In your case, you combine 9 (0b1001) with 12 (0b1100), so you write 0b10011100 (0x9C). Bit #7 is 1.
When byteToRead is right-shifted you get 0b11111001 (0xF9), but it is implicitly converted to an int. The conversion from char to int also preserves the value's sign, so it produces 0xFFFFFFF9. That int is then implicitly converted to an unsigned int, so num1 contains 0xFFFFFFF9, which is 4294967289.
There are two solutions, sketched below:
cast byteToRead to an unsigned char when doing the right shift;
apply a mask to the shift's result to keep only the 4 bits you want.
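A minimal sketch of both fixes on the byte from the question (0x9C, i.e. 9 in the high nibble and 12 in the low one); the masking variant assumes the usual arithmetic right shift for negative values:

#include <iostream>

int main() {
    char byteToRead = static_cast<char>(0x9C);  // the byte holding 9 and 12

    // Fix 1: convert to unsigned char before shifting, so no sign extension happens.
    unsigned int num1 = static_cast<unsigned char>(byteToRead) >> 4;

    // Fix 2: shift first, then mask away the sign-extended upper bits.
    unsigned int alt = (byteToRead >> 4) & 0x0F;

    unsigned int num2 = byteToRead & 0x0F;
    std::cout << num1 << " " << alt << " " << num2 << "\n";  // prints 9 9 12
    return 0;
}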
The problem originates with byteToRead >> 4. In C, arithmetic operations are performed in at least int precision, so the first thing that happens is that byteToRead is promoted to int.
These promotions are value-preserving. Your system has plain char as signed, i.e. with the range -128 through 127. Your char might initially have been -112 (bit pattern 10010000), and after promotion to int it retains its value of -112 (bit pattern 11111...1110010000).
The right-shift of a negative value is implementation-defined, but a common implementation is an "arithmetic shift", i.e. division by two for each bit shifted; so the result of byteToRead >> 4 is -7 (bit pattern 11111....111001).
Converting -7 to unsigned int results in UINT_MAX - 6, which is 4294967289, because unsigned arithmetic is defined as wrapping around modulo UINT_MAX + 1.
To fix this you need to convert to unsigned before performing the arithmetic. You could cast (or alias) byteToRead to unsigned char, e.g.:
unsigned char byteToRead;
file.read( (char *)&byteToRead, 1 );
I have tried the following two pieces of code:
#include <iostream>
#include <conio.h>
using namespace std;
int main()
{
    int val = -125;
    char code = val;
    cout << "\t" << code << " " << (int)code;
    getch();
}
The output I got is: a^ -125
The second code is:
#include <iostream>
#include <conio.h>
using namespace std;
int main()
{
    int val = -125;
    unsigned char code = val;
    cout << "\t" << code << " " << (int)code;
    getch();
}
The output I got is: a^ 131
After trying both pieces of code, is it safe to conclude that a character can have two ASCII values, or is my approach to finding the ASCII value(s) flawed?
P.S.
I was unable to upload pictures of my output, so I am forced to type it out; the character I got isn't present on a standard keyboard.
In both examples 'code' has the same bitwise value. The first bit is 1, because it was a negative number. Since both values have the same bit pattern, the output character is the same (converting from number to character treats the number as an unsigned value).
After that you convert your character back to a (signed) integer. This conversion respects the type, and therefore the sign, of your char:
unsigned char -> int: always positive;
char -> int: same sign as the char (and because the first bit was 1, it is negative here).
Unsigned integers in C++ have modulo 2^n behavior, where n is the number of value bits.
That means if your char has 8 bits, then unsigned char has modulo 256 behavior.
This behavior is as if the values 0 through 255 were placed on a clock face: any operation whose result goes past the 0-255 divide just wraps around, like arithmetic with hours on a clock face.
Which means that assigning the value -125 yields the corresponding value in the range 0 through 255, namely -125 + 256 = 131.
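A small sketch of both conversions side by side, assuming an 8-bit char that is signed by default (as on the asker's platform):

#include <iostream>

int main() {
    int val = -125;
    unsigned char u = val;   // modulo-256 wrap-around: -125 + 256 = 131
    char s = val;            // -125 fits in a signed 8-bit char, so it is kept
    std::cout << static_cast<int>(u) << " "    // prints 131
              << static_cast<int>(s) << "\n";  // prints -125
    return 0;
}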
I was going through the following code. It basically truncates the digits of the number entered through the cin object. The problem is I don't know how assigning an int value to a character object truncates the digits except for the first.
#include <iostream>
using namespace std;
int main(){
    unsigned int integer;
    unsigned char character;
    cin >> integer;
    character = integer;
    cout << character;
}
The problem is I don't know how assigning an int value to a character object truncates the digits except for the first.
Let's for the sake of illustration assume that char is unsigned and is 8 bits wide, and int is 32 bits wide. What such an assignment would do is chop off the top 24 bits, leaving the bottom 8.
The truncation does not have anything to do with the decimal digits of the integer. For example, 9999 would become 15 (because 9999 & 0xFF == 15).
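For example, a sketch with the question's own variables showing that 9999 keeps only its low byte:

#include <iostream>

int main() {
    unsigned int integer = 9999;        // 0x270F
    unsigned char character = integer;  // keeps only the low byte, 0x0F
    std::cout << static_cast<int>(character) << "\n";  // prints 15, not 9
    return 0;
}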
I am not sure what you mean by "except for the first", but let me see if I can explain what is happening.
unsigned char is, I believe, required by the standard to be 1 byte in length. int is typically much longer; 4 bytes is typical. Thus when you enter a number > 255, it loses all the value above that, since all a char can hold is one byte, and the leading 3 bytes of data are lost.
Please see the simple code below:
#include <iostream>
#include <stdlib.h>
using namespace std;
int main(void)
{
    unsigned long currentTrafficTypeValueDec;
    long input;
    input = 63;
    currentTrafficTypeValueDec = (unsigned long) 1LL << input;
    cout << currentTrafficTypeValueDec << endl;
    printf("%u \n", currentTrafficTypeValueDec);
    printf("%ld \n", currentTrafficTypeValueDec);
    return 0;
}
Why does printf() display currentTrafficTypeValueDec (an unsigned long) as a negative value?
The output is:
9223372036854775808
0
-9223372036854775808
%d is a signed formatter. Reinterpreting the bits of currentTrafficTypeValueDec (2 to the 63rd power) as a signed long gives a negative value. So printf() prints a negative number.
Maybe you want to use %lu?
You lied to printf by passing it an unsigned value while the format spec said it would be a signed one.
Fun with bits...
cout prints the number as an unsigned long: all 64 bits are significant and are printed as an unsigned binary integer (the printf format for this would be %lu).
printf("%u"...) treats the argument as a normal unsigned int (typically 32 bits), which causes bits 33 through 64 to drop off, leaving zero.
printf("%ld"...) treats the argument as a 64-bit signed number and prints it as such.
The thing you might find confusing about the last printf is that it gives the same absolute value as cout, but with a minus sign. When viewing the value as an unsigned integer, all 64 bits are significant in producing the integer value. For signed numbers, however, bit 64 is the sign bit. When the sign bit is set (as it is in your example) it indicates that the remaining 63 bits are to be treated as a negative number represented in two's complement. A positive number is printed simply by converting its binary value to decimal. For a negative number the following happens: print a negative sign, XOR bits 1 through 63 with binary '1' bits, add 1 to the result, and print the resulting unsigned value. By dropping the sign bit (bit 64) you end up with 63 '0' bits; XORing with '1' bits gives you 63 '1' bits; adding 1 rolls the whole thing over, giving an unsigned integer with bit 64 set to '1' and the rest set to '0', which is the same value you got with cout, BUT printed as a negative number.
Once you have worked out why the above explanation is correct, you should also be able to make sense of this.
Because you're shifting the 1 into the sign bit of the variable. When you print it as a signed value, it's negative.
The variable doesn't carry its type with it; you tell printf its type. Try:
printf("%lu \n", currentTrafficTypeValueDec);
because ld means long signed, which this variable is not.
You're printing with %ld, i.e. a signed long decimal. That's why it prints a negative value.
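A sketch of the corrected output, matching each specifier to the argument's actual type; the 1UL << 63 line assumes a 64-bit unsigned long (as on the asker's platform), and the PRIu64 variant avoids that assumption altogether:

#include <cinttypes>
#include <cstdio>
#include <iostream>

int main() {
    unsigned long currentTrafficTypeValueDec = 1UL << 63;  // assumes 64-bit unsigned long

    std::cout << currentTrafficTypeValueDec << "\n";   // 9223372036854775808
    std::printf("%lu\n", currentTrafficTypeValueDec);  // %lu matches unsigned long

    // Fixed-width alternative: uint64_t plus the PRIu64 macro from <cinttypes>.
    std::uint64_t v = UINT64_C(1) << 63;
    std::printf("%" PRIu64 "\n", v);
    return 0;
}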