Invert 7th bit of hex string C++ [duplicate] - c++

Possible Duplicate: Change bit of hex number with leading zeros in C++, (C)
I have this number as a hex string:
002A05 (the 7th bit is set to 0)
I need to invert the 7th bit of this number, so after the conversion I will get
022A05
But in the case of
ABCDEF (the 7th bit is set to 1)
I need to get
A9CDEF
It has to work with any 6-character hex number.
It needs to be the 7th bit from the left. I'm trying to convert an OUI to a modified EUI-64.
I tried converting the hex string to an integer via strtol, but that function strips the leading zeros.
Please help me figure out how to solve this.

The simplest way, but not necessarily the cleanest:
Since only one char is affected, you can do it with a simple string manipulation; assuming your input is uppercase and stored in the string input:
input[1] = "23016745AB*******89EFCD"[input[1]-48];
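For instance, wrapped in a minimal program (the input value here is just an example; the table assumes a valid, uppercase, 6-digit hex string):

#include <iostream>
#include <string>

int main()
{
    std::string input = "002A05";   // uppercase, exactly 6 hex digits
    // The table maps the 2nd hex digit to the same digit with its 0x2 bit
    // flipped; the '*' entries are never reached for valid hex input.
    input[1] = "23016745AB*******89EFCD"[input[1] - 48];
    std::cout << input << '\n';     // prints 022A05
}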

#include <stdio.h>

void flipme(char *buf, const char *inBuf)
{
    unsigned int x;
    sscanf(inBuf, "%x", &x);    /* parse the 6 hex digits into a number */
    x ^= 1u << 17;              /* bit 17 is the 7th bit from the left of a 24-bit value */
    sprintf(buf, "%06X", x);    /* re-pad to 6 hex digits on output */
}

int main(void)
{
    char buf[16];
    flipme(buf, "002A05");
    printf("002A05->%s\n", buf);
    flipme(buf, "ABCDEF");
    printf("ABCDEF->%s\n", buf);
}
Output:
002A05->022A05
ABCDEF->A9CDEF
You wrote:
I tried converting the hex string to an integer via strtol, but that function strips the leading zeros.
The strtol function converts it to a number. It doesn't mean anything to say it strips leading zeroes because numbers don't have leading zeroes -- "6" and "06" are two different ways of writing the same number. If you want leading zeroes when you print it, you can add them then.
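For example, a small sketch of that approach: parse once with strtol, flip the bit, and re-add the padding only when printing:

#include <cstdio>
#include <cstdlib>

int main()
{
    long x = std::strtol("002A05", nullptr, 16);   // x == 0x2A05; nothing is "lost"
    x ^= 1L << 17;                                 // flip the 7th bit from the left of 24
    std::printf("%06lX\n", (unsigned long)x);      // pad back to 6 digits: prints 022A05
}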

num ^ 0x020000
^ is the bitwise xor operator.

Given an integer x, the number with its 3rd bit inverted is x ^ 2.
The rest of the answer was given to you earlier.
Note: in the question the bits are counted from highest to lowest, starting at 1. By that convention, the 7th bit of a 6-digit hex number is the 3rd bit of its 2nd-highest digit. Normally, though, bits are counted from lowest to highest, starting at 0.
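To make the two conventions line up, here is a small sketch (the 24 comes from a 6-digit hex number having 24 bits):

#include <cstdio>

int main()
{
    int n = 7;                                // n-th bit counted from the left, starting at 1
    unsigned mask = 1u << (24 - n);           // 1u << 17 == 0x020000
    std::printf("%06X\n", 0xABCDEFu ^ mask);  // prints A9CDEF
}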

Related

Scanf does not read leading 0s if the digit after the leading 0s is 8 or 9

I have a very strange question which stems from a bug in a C++11 program I have been writing.
See the following code:
#include <cstdio>
#include <iostream>

long long a[1000];

int main(int argc, char *argv[]) {
    for (long long i = 0; i < 300; ++i) {
        scanf("%lli", &a[i]);
        std::cout << a[i] << std::endl;
    }
    return 0;
}
Trying the inputs 1, 2, etc., we get the outputs 1\n, 2\n, etc., as expected. This also works for inputs like 001, where we get 1\n, and 0004, where we get 4\n.
However, when the digit after the leading zeros is an 8 or a 9, scanf() reads the leading zeros first and then reads the remaining digits separately.
For example:
Input: 0009, output: 000\n9\n.
Input: 08, output 0\n8\n.
Input: 00914, output 00\n914\n.
I've done some testing, and in these cases it seems scanf() reads only the leading zeros, while the remaining digits are left in the buffer and picked up on the next iteration of the loop.
Can someone hint at what is going on?
I am using XCode 11.3.7 and compiling with Clang C++11. (I haven't messed with the project settings)
Thank you in advance!!!
Use %lld instead of %lli.
The reason %i doesn't work is that a leading 0 is interpreted as a prefix for octal numbers, and the digits 8 and 9 don't exist in octal:
d   Matches an optionally signed decimal integer; the next pointer must be a pointer to int.
i   Matches an optionally signed integer; the next pointer must be a pointer to int. The integer is read in base 16 if it begins with 0x or 0X, in base 8 if it begins with 0, and in base 10 otherwise. Only characters that correspond to the base are used.
You would also get the wrong answer for other numbers, e.g. 010 in octal would be parsed as 8.
Or, even better: use C++ instead of C.
std::cin >> a[i];
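A minimal corrected version of the loop from the question, as a sketch (same array, same input format), using %lld so a leading 0 is never treated as an octal prefix:

#include <cstdio>
#include <iostream>

long long a[1000];

int main() {
    for (long long i = 0; i < 300; ++i) {
        if (std::scanf("%lld", &a[i]) != 1)   // %lld always reads base 10
            break;                            // stop on EOF or bad input
        std::cout << a[i] << std::endl;       // std::cin >> a[i] would also work
    }
    return 0;
}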

WAP to convert octal to binary

In a C++ program to convert octal numbers to binary, I was able to run the program, but the problem is that the output comes out as:
Enter number in octal: 1
1 in octal:1
But as we know, each octal digit corresponds to 3 bits, so what should I do to get the result as 001?
For information, I declared a function and returned the value of 'binary' (a variable I declared) from the function definition.
Should I declare the binary variable as binary[3]? If not, what is the correct way?
#include <iomanip>
cout << setfill('0') << setw(3) << binary
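For context, a minimal self-contained version of that snippet (binary here is just a placeholder for whatever value your conversion function returned, stored as decimal digits, e.g. 101 for five):

#include <iomanip>
#include <iostream>

int main()
{
    int binary = 1;   // e.g. octal 1 converted to binary 1
    std::cout << std::setfill('0') << std::setw(3) << binary << '\n';  // prints 001
}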
If leading zeros matter to you... this could sort of be a dup. See "How can I pad an int with leading zeros when using cout << operator?".
Anyway, aside from (in spite of?) the lack of example code in the question... :-)
I would suggest looking at the answer from that post about printf(), which I find to be much more concise than cout formatting.
Try this (assuming your number is stored in a variable named 'i', adjust accordingly):
int i = 0113; // should yield decimal=75, hex=4B, and octal=113.
// important: note that we give printf 3 arguments after the string: i, i, i.
printf("decimal: %06d hex: %06x octal: %06o\n", i, i, i); // zero filled, all 6 long.
printf("decimal: %6d hex: %6x octal: %6o\n", i, i, i); // note lack of leading zeros, still 6 long.
printf("decimal: %d hex: %x octal: %o\n", i, i, i); // just as long as they need to be (1 to many).
This is one place (of many) to get some printf formatting info: http://www.lix.polytechnique.fr/~liberti/public/computing/prog/c/C/FUNCTIONS/format.html
FYI - internally your integers are already in binary, and are probably already using 32 (or maybe 64) bits each, depending on whether they're short, int, long, and so on. So I suspect there isn't much need to change your variable declarations.
The trick here is to understand that printf (or cout, if you prefer) generates the characters that represent the binary number (i, above) on your output. This is why the compiler wants you to use 123 for decimal, 0123 for octal (note the leading zero), and 0x123 for hex, so it can convert them from characters in your source code (or input at run time) into the proper binary format in a given variable's memory.
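If what you actually want printed is the binary digits themselves, padded to 3 bits per octal digit, one way is a per-digit lookup. This is only a sketch (octal_to_binary and bits are made-up names, not from the question):

#include <iostream>
#include <string>

// Expand each octal digit into its 3-bit pattern, so "1" becomes "001".
std::string octal_to_binary(const std::string& octal)
{
    static const char* bits[8] = {
        "000", "001", "010", "011", "100", "101", "110", "111"
    };
    std::string result;
    for (char c : octal)
        result += bits[c - '0'];   // assumes c is a valid octal digit '0'..'7'
    return result;
}

int main()
{
    std::cout << octal_to_binary("1") << '\n';   // 001
    std::cout << octal_to_binary("17") << '\n';  // 001111
}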

Regarding conversion of text to hex via ASCII in C++

So, I've looked up how to do a conversion from text to hexadecimal according to ASCII, and I have a working solution (proposed here). My problem is that I don't understand why it works. Here's my code:
#include <string>
#include <iostream>

int main()
{
    std::string str1 = "0123456789ABCDEF";
    std::string output[2];
    std::string input;
    std::getline(std::cin, input);
    output[0] = str1[input[0] & 15];
    output[1] = str1[input[0] >> 4];
    std::cout << output[1] << output[0] << std::endl;
}
Which is all well and good - it returns the hexadecimal value for single characters, however, what I don't understand is this:
input[0] & 15
input[0] >> 4
How can you perform bitwise operations on a character from a string? And why does it oh-so-nicely return the exact values we're after?
Thanks for any help! :)
In C++ a char is (typically) 8 bits long.
If you AND it with 15 (binary 1111), you keep only the least significant 4 bits, which give you the low-order hex digit.
When you right-shift it by 4, that is equivalent to dividing the character's value by 16; this gives you the most significant 4 bits, i.e. the high-order digit.
Once those two digit values are calculated, the corresponding characters are picked out of the constant string str1, which holds all the hex digits in their respective positions.
"Characters in a string" are not characters (individual strings of one character only). In some programming languages they are. In Javascript, for example,
var string = "testing 1,2,3";
var character = string[0];
returns "t".
In C and C++, however, strings are arrays of 8-bit char elements; each element of the array is just a small number (the character's code).
Characters are just integers. In ASCII the character '0' is the integer 48. C++ makes this conversion implicitly in many contexts, including the one in your code.
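A tiny sketch of that point (the values assume an ASCII execution character set):

#include <cstdio>

int main()
{
    char c = 'A';                                     // 'A' is the integer 65 (0x41) in ASCII
    std::printf("%d 0x%X\n", c, (unsigned)c);         // 65 0x41
    std::printf("%X %X\n",
                (unsigned)(c >> 4), (unsigned)(c & 15));  // 4 1 -- the two hex digits
}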

printf: Displaying an SHA1 hash in hexadecimal

I have been following the msdn example that shows how to hash data using the Windows CryptoAPI. The example can be found here: http://msdn.microsoft.com/en-us/library/windows/desktop/aa382380%28v=vs.85%29.aspx
I have modified the code to use the SHA1 algorithm.
I don't understand how the code that displays the hash (shown below) in hexadecimal works; more specifically, I don't understand what the >> 4 and & 0xf operations do.
if (CryptGetHashParam(hHash, HP_HASHVAL, rgbHash, &cbHash, 0)) {
    printf("MD5 hash of file %s is: ", filename);
    for (DWORD i = 0; i < cbHash; i++)
    {
        printf("%c%c", rgbDigits[rgbHash[i] >> 4],
                       rgbDigits[rgbHash[i] & 0xf]);
    }
    printf("\n");
}
I would be grateful if someone could explain this for me, thanks in advance :)
x >> 4 shifts x right four bits. x & 0xf does a bitwise and between x and 0xf. 0xf has its four least significant bits set, and all the other bits clear.
Assuming rgbHash is an array of unsigned char, this means the first expression retains only the four most significant bits and the second expression the four least significant bits of the (presumably) 8-bit input.
Four bits is exactly what will fit in one hexadecimal digit, so each of those is used to look up a hexadecimal digit in an array which presumably looks something like this:
char rgbDigits[] = "0123456789abcdef"; // or possibly upper-case letters
This code uses a simple bit 'filtering' technique:
">> 4" means shift right by 4 places, which amounts to dividing by 16 and leaves the high 4 bits of the byte.
"& 0xf" is a bitwise AND that keeps only the low 4 bits.
Each of these 4-bit values is then used to index into rgbDigits, which presumably maps it to a human-readable hex character.
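As a side note (this is not part of the MSDN sample), the same output can be produced without a digit table by letting printf do the formatting; a small standalone sketch with a made-up buffer:

#include <cstdio>

int main()
{
    unsigned char hash[4] = { 0xDE, 0xAD, 0xBE, 0xEF };  // stand-in for rgbHash
    for (unsigned i = 0; i < sizeof hash; ++i)
        std::printf("%02x", hash[i]);   // "%02x": two hex digits, zero-padded
    std::printf("\n");                  // prints deadbeef
}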

Why are the bits of a std::bitset in reverse order? [duplicate]

Possible duplicate: Why does std::bitset expose bits in little-endian fashion?
Why does bitset store the bits in reverse order? After struggling many times I have finally written this binary_to_dec. Could it be simplified?
int binary_to_dec(std::string bin)
{
    std::bitset<8> bit;
    int c = bin.size();
    for (size_t i = 0; i < bin.size(); i++, c--)
    {
        bit.set(c - 1, (bin[i] - '0' ? true : false));
    }
    return bit.to_ulong();
}
Bitset stores its numbers in what you consider to be "reverse" order because we write the digits of a number in decreasing order of significance even though the characters of a string are arranged in increasing index order.
If we wrote our numbers in little-endian order, then you wouldn't have this confusion because the character at index 0 of your string would represent bit 0 of the bitset. But we write our numbers in big-endian order. I'm afraid I don't know the details of human history that led to that convention. (And note that the endianness that any particular CPU uses to store multi-byte numbers is irrelevant. I'm talking about the endianness we use when displaying numbers for humans to read.)
For example, if we write the decimal number 12 in binary, we get 1100. The least significant bit is on the right. We call that "bit 0." But if we put that in a string, "1100", the character at index 0 of that string represents bit 3, not bit 0. If we created a bitset with the bits in the same order as the characters, to_ulong would return 3 instead of 12.
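A quick way to see both points at once: bit 0 of a bitset is the least significant bit, and the string constructor reads the characters in the usual written order (leftmost character = most significant bit). A small sketch:

#include <bitset>
#include <iostream>

int main()
{
    std::bitset<4> b("1100");                  // read as written: b[3]=1, b[2]=1, b[1]=0, b[0]=0
    std::cout << b.to_ulong() << '\n';         // 12
    std::cout << b[0] << ' ' << b[3] << '\n';  // 0 1 -- index 0 is the rightmost character
}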
The bitset class has a constructor that accepts a std::string, and it already interprets the string the way we write numbers: the first character becomes the most significant bit and the last character becomes bit 0. So you don't need to set the bits one at a time; you can pass the string straight through. Try this:
int binary_to_dec(std::string const& bin)
{
    std::bitset<8> bit(bin);
    return bit.to_ulong();
}
unsigned long binary_to_dec(std::string bin)
{
    std::bitset<sizeof(unsigned long) * 8> bits(bin);
    return bits.to_ulong();
}
EDIT: formatting and return type.