I'm looking for simple code (only C++ strings and loops allowed) that reads a one-line text as a string and then outputs this string "encrypted by shifting".
It should look like this:
Please enter the text to be encrypted: abc def xyz!ABC XYZ?
Please enter the number of shift positions (as a positive integer):
3
def ghi abc!DEF ABC?
User input is underlined in bold.
And this is how far I got:
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;

int main()
{
    string decrypted_Text;
    int number_of_movements;

    cout << "Please enter the text to be encrypted: ";
    getline(cin, decrypted_Text);   // read the whole line, including spaces
    cout << "Please enter the number of shift positions (as a positive integer): ";
    cin >> number_of_movements;

    system("PAUSE");
    return 0;
}
I know that text.size() will be needed here, or at least I can imagine that, but I just don't know how to implement it.
The biggest problem is potential overflow, so we need to deal with that.
Then we need to understand what encryption and decryption mean. If encryption shifts everything one position to the right, decryption shifts it back to the left again.
So, with "def" and key=1, the encrypted string will be "efg".
And decryption with key=1 will shift it to the left again. Result: "def"
We can observe that we simply need to shift by -1, i.e. the negative of the key.
So, basically, encryption and decryption can be done with the same routine. We just need to invert the key.
Let us now look at the overflow problem. For the moment we will consider uppercase characters only. Characters have an associated code. For example, in ASCII, the letter 'A' is encoded as 65, 'B' as 66, and so on. Because we do not want to calculate with such numbers, we normalize them: we simply subtract 'A' from each character. Then
'A' - 'A' = 0
'B' - 'A' = 1
'C' - 'A' = 2
'D' - 'A' = 3
You see the pattern. If we now want to encrypt the letter 'C' with key 3, we can do the following:
'C' - 'A' + 3 = 5. Then we add 'A' again to get back a letter: 5 + 'A' = 'F'
That is the whole magic.
But what do we do with an overflow beyond 'Z'? This can be handled with a simple modulo operation.
Let us look at 'Z' + 1. We compute 'Z' - 'A' = 25, then +1 = 26, then modulo 26 = 0, and adding 'A' gives 'A' again.
And so on. The resulting formula is: (c-'A'+key)%26+'A'
Next, what about negative keys? This is also simple. Assume an 'A' and key=-1.
The result will be 'Z'. But this is the same as shifting 25 to the right. So, we can simply convert a negative key into a positive shift. The statement is:
if (key < 0) key = (26 + (key % 26)) % 26;
Then we can call our transformation function with a simple lambda: one function for encryption and decryption, just with an inverted key.
And with the above formula, there is no need to check for negative values at all. It works for positive and negative values alike.
So, key = (26 + (key % 26)) % 26; will always work.
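To illustrate the two formulas above, here is a minimal sketch for a single uppercase character (the function name shiftUpper is just an illustrative choice, not part of the original answer):
#include <iostream>

// Shift one uppercase letter by 'key' positions (key may be negative).
char shiftUpper(char c, int key) {
    key = (26 + (key % 26)) % 26;            // normalize the key to 0..25
    return (char)((c - 'A' + key) % 26 + 'A');
}

int main() {
    std::cout << shiftUpper('C', 3) << '\n';   // F
    std::cout << shiftUpper('Z', 1) << '\n';   // A (wrap-around)
    std::cout << shiftUpper('A', -1) << '\n';  // Z (negative key)
}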
Some extended information, in case you work with the ASCII character representation. Please have a look at any ASCII table. You will see that corresponding uppercase and lowercase characters differ by 32. Or, if you look at the binary representation:
char  dec  bin        char  dec  bin
'A'   65   0100 0001  'a'   97   0110 0001
'B'   66   0100 0010  'b'   98   0110 0010
'C'   67   0100 0011  'c'   99   0110 0011
. . .
So, if you already know that a character is alphabetic, then the only difference between upper- and lowercase is bit number 5. If we want to know whether a char is lowercase, we can find out by masking this bit: c & 0b00100000, which is the same as c & 32 or c & 0x20.
If we want to operate on either uppercase or lowercase characters, we can mask the "case" away. With c & 0b00011111, or c & 31, or c & 0x1F, we always get the uppercase equivalent, already normalized to start at one.
char  dec  bin        masking       char  dec  bin        masking
'A'   65   0100 0001  & 0x1F = 1    'a'   97   0110 0001  & 0x1F = 1
'B'   66   0100 0010  & 0x1F = 2    'b'   98   0110 0010  & 0x1F = 2
'C'   67   0100 0011  & 0x1F = 3    'c'   99   0110 0011  & 0x1F = 3
. . .
So, if we use an alpha character, mask it, and subtract 1, then we get as a result 0..25 for any upper- or lowercase character.
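As a small illustration of these two masks (a sketch only, assuming ASCII; the variable names are mine, not from the ASCII table above):
#include <iostream>

int main() {
    const char samples[] = { 'G', 'g' };
    for (char c : samples) {
        int isLowerBit = c & 0x20;   // 32 if lowercase, 0 if uppercase
        int caseless   = c & 0x1F;   // 1..26 for 'A'..'Z' and 'a'..'z'
        std::cout << c << ": case bit = " << isLowerBit
                  << ", masked value = " << caseless
                  << ", normalized 0..25 = " << (caseless - 1) << '\n';
    }
}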
Additionally, I would like to repeat the key handling. Positive keys will encrypt a string, negative keys will decrypt it. But, as said above, negative keys can be transformed into positive ones. Example:
Shifting by -1 is the same as shifting by +25
Shifting by -2 is the same as shifting by +24
Shifting by -3 is the same as shifting by +23
Shifting by -4 is the same as shifting by +22
So, it is obvious that we can calculate an always-positive key with: 26 + key. For negative keys, this gives us the offsets above.
And for positive keys, we would get an overflow above 26, which we can eliminate with a modulo 26 operation:
'A'--> 0 + 26 = 26 26 % 26 = 0
'B'--> 1 + 26 = 27 27 % 26 = 1
'C'--> 2 + 26 = 28 28 % 26 = 2
'D'--> 3 + 26 = 29 29 % 26 = 3
--> (c + key) % 26 eliminates overflows and results in the correct new en-/decrypted character.
And if we combine this with the above insight for negative keys, we can write ((26+(key%26))%26), which will work for all positive and negative keys.
Combining that with the masking could give us the following code:
const char potentialLowerCaseIndicator = c & 0x20;              // 32 if c is lowercase, else 0
const char upperOrLower = c & 0x1F;                             // 1..26, case removed
const char normalized = upperOrLower - 1;                       // 0..25
const int  withOffset = normalized + ((26 + (key % 26)) % 26);  // add the normalized key
const int  withOverflowCompensation = withOffset % 26;          // wrap around at 'Z'
const char newUpperCaseCharacter = (char)withOverflowCompensation + 'A';
const char result = newUpperCaseCharacter | potentialLowerCaseIndicator;  // restore the original case
Of course, all of the above statements can be condensed into one lambda:
#include <string>
#include <algorithm>
#include <cctype>
#include <iostream>
// Simple function for Caesar encryption/decryption
std::string caesar(const std::string& in, int key) {
    std::string res(in.size(), ' ');
    std::transform(in.begin(), in.end(), res.begin(), [&](char c) {
        return std::isalpha(c) ? (char)((((c & 31) - 1 + ((26 + (key % 26)) % 26)) % 26 + 65) | (c & 32)) : c;
    });
    return res;
}

int main() {
    std::string test{ "aBcDeF xYzZ" };
    std::cout << caesar(test, 5);
}
The last function can also be made more verbose:
std::string caesar1(const std::string& in, int key) {
    std::string res(in.size(), ' ');
    auto convert = [&](const char c) -> char {
        char result = c;
        if (std::isalpha(c)) {
            // Handling of a negative key (shift to the left). The key will be converted to a positive value
            if (key < 0) {
                // Limit the key to 0, -1, ..., -25
                key = key % 26;
                // The key was negative: now we have something between 0 and 26
                key = 26 + key;
            }
            // Check and remember whether the original character was lowercase
            const bool originalIsLower = std::islower(c);
            // We want to work with uppercase only
            const char upperCaseChar = (char)std::toupper(c);
            // But we want to start at 0, not at 'A' (65)
            const int normalized = upperCaseChar - 'A';
            // Now add the key
            const int shifted = normalized + key;
            // The result may be bigger than 25, i.e. an overflow. Cap it
            const int capped = shifted % 26;
            // Get back a character
            const char convertedUppercase = (char)capped + 'A';
            // And restore the original case
            result = originalIsLower ? (char)std::tolower(convertedUppercase) : convertedUppercase;
        }
        return result;
    };
    std::transform(in.begin(), in.end(), res.begin(), convert);
    return res;
}
EDIT
Please see below a solution using only the simplest statements.
#include <iostream>
#include <string>
using namespace std;
string caesar(string in, int key) {
    // Here we will store the resulting encrypted/decrypted string
    string result{};

    // Handling of a negative key (shift to the left). The key will be converted to a positive value
    if (key < 0) {
        // Limit the key to 0, -1, ..., -25
        key = key % 26;
        // The key was negative: now we have something between 0 and 26
        key = 26 + key;
    }

    // Read character by character from the string
    for (unsigned int i = 0; i < in.length(); ++i) {
        char c = in[i];
        // Check for an alphabetic character
        if ((c >= 'A' and c <= 'Z') or (c >= 'a' and c <= 'z')) {
            // Check and remember whether the original character was lowercase
            bool originalIsLower = (c >= 'a' and c <= 'z');
            // We want to work with uppercase only
            char upperCaseChar = originalIsLower ? c - ('a' - 'A') : c;
            // But we want to start at 0, not at 'A' (65)
            int normalized = upperCaseChar - 'A';
            // Now add the key
            int shifted = normalized + key;
            // The result may be bigger than 25, i.e. an overflow. Cap it
            int capped = shifted % 26;
            // Get back a character
            char convertedUppercase = (char)capped + 'A';
            // And restore the original case
            result += originalIsLower ? convertedUppercase + ('a' - 'A') : convertedUppercase;
        }
        else
            result += c;
    }
    return result;
}

int main() {
    string test{ "aBcDeF xYzZ" };
    string encrypted = caesar(test, 5);
    string decrypted = caesar(encrypted, -5);

    cout << "Original:  " << test << '\n';
    cout << "Encrypted: " << encrypted << '\n';
    cout << "Decrypted: " << decrypted << '\n';
}
Related
The following code will decrypt a Caesar-encrypted string, given the ciphertext and the key:
#include <iostream>
#include <string>

std::string decrypt(std::string cipher, int key) {
    std::string d = "";
    for (int i = 0; i < cipher.length(); i++) {
        d += ((cipher[i]-65-key+26) %26)+65;
    }
    return d;
}

int main()
{
    std::cout << decrypt("WKLVLVJRRG", 3) << std::endl;  // THISISGOOD
    std::cout << decrypt("NBCMCMAIIX", 20) << std::endl; // THISISGOOD
}
I'm having trouble understanding the operations performed to compute the new character's ASCII code on this line:
d += ((cipher[i]-65-key+26) %26)+65;
The first subtraction should shift the number range.
Then we subtract the key, as the Caesar decryption is defined.
We add 26 to deal with negative numbers (?).
The modulo limits the output, since the alphabet range is 26 characters long.
We come back to the old range by adding 65 at the end.
What am I missing?
If we reorder the expression slightly, like this:
d += (((cipher[i] - 65) + (26 - key)) % 26) + 65;
We get a formula for rotating cipher[i] left by key:
cipher[i] - 65 brings the ASCII range A..Z into an integer range 0..25
(cipher[i] - 65 + 26 - key) % 26 rotates that value left by key (subtracts key modulo 26)
+ 65 to shift the range 0..25 back into ASCII range A..Z.
e.g. given a key of 2, A becomes Y, B becomes Z, C becomes A, etc.
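For comparison, the matching encryption direction uses the same idea with the key added instead of subtracted. A minimal sketch (the function name encrypt is my own choice, not part of the question's code):
#include <iostream>
#include <string>

// Rotate each letter right by 'key' positions (A..Z only), the inverse of decrypt() above.
std::string encrypt(std::string plain, int key) {
    std::string e = "";
    for (char ch : plain) {
        e += ((ch - 65 + key) % 26) + 65;
    }
    return e;
}

int main() {
    std::cout << encrypt("THISISGOOD", 3) << std::endl; // WKLVLVJRRG
}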
I was looking at a program for time conversion and am not able to understand part of the code. Here's the full program.
#include <iostream>
#include <cstdio>
#include <string>
using namespace std;

int main() {
    string s;
    cin >> s;
    int n = s.length();
    int hh, mm, ss;
    hh = (s[0] - '0') * 10 + (s[1] - '0');
    mm = (s[3] - '0') * 10 + (s[4] - '0');
    ss = (s[6] - '0') * 10 + (s[7] - '0');
    if (hh < 12 && s[8] == 'P') hh += 12;
    if (hh == 12 && s[8] == 'A') hh = 0;
    printf("%02d:%02d:%02d\n", hh, mm, ss);
    return 0;
}
The part of the code I am not able to understand is
hh = (s[0] - '0') * 10 + (s[1] - '0');
mm = (s[3] - '0') * 10 + (s[4] - '0');
ss = (s[6] - '0') * 10 + (s[7] - '0');
Thanks in advance.
If you look at e.g. this ASCII table (ASCII being the most common character encoding scheme), you can see that the character '2' has the decimal value 50, and that the character '0' has the decimal value 48.
Considering that a character is just really a small integer, we can use normal arithmetic on them. That means if you do e.g. '2' - '0' that's the same as doing 50 - 48 which results in the decimal value 2.
So to get the decimal value of a character digit, just subtract '0'.
The multiplication with 10 is because we're dealing with the decimal system, where a number such as 21 is the same as 2 * 10 + 1.
It should be noted that the C++ specification explicitly says that all digits have to be encoded in a contiguous range, so it doesn't matter which encoding is used; this will always work.
You might see this "trick" being used to get a decimal value for letters as well, but note that the C++ specification doesn't say anything about that. In fact there are encodings where the range of letters is not contiguous and where this will not work. It's only specified to work on digits.
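As a small demonstration of the digit trick (a sketch only; the time string is just a hypothetical example input):
#include <iostream>

int main() {
    const char* s = "12:23:53";   // hypothetical example input
    int tens = s[0] - '0';        // '1' - '0' == 1
    int ones = s[1] - '0';        // '2' - '0' == 2
    int hh   = tens * 10 + ones;  // 1 * 10 + 2 == 12
    std::cout << hh << '\n';      // prints 12
}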
In the ASCII encoding, digits are encoded sequentially. That is:
'0' has the value 48
'1' has the value 49
'2' has the value 50
etc
Therefore, 'x' - '0' == x for every digit character 'x' from '0' to '9'.
For example:
If you have the string 12:23:53, we have:
hh = (s[0] - '0') * 10 + (s[1] - '0');
(s[0] - '0') means '1' - '0', which is equal to 1; but this is the tens digit, so we multiply by 10 and add (s[1] - '0'), which is '2' - '0', i.e. 2. In total: 12.
Same thing for the minutes and seconds.
If I have a char array A, I use it to store hex
A = "0A F5 6D 02" size=11
The binary representation of this char array is:
00001010 11110101 01101101 00000010
I want to ask: is there any function that can randomly flip bits?
That is:
if the parameter is 5
00001010 11110101 01101101 00000010
-->
10001110 11110001 01101001 00100010
it will randomly choose 5 bits to flip.
I am trying to convert this hex data to binary data and use a bitmask method to achieve my requirement, then turn it back to hex. I am curious whether there is any method to do this job more quickly.
Sorry, my question description is not clear enough. Simply put, I have some hex data, and I want to simulate bit errors in this data. For example, if I have 5 bytes of hex data:
"FF00FF00FF"
binary representation is
"1111111100000000111111110000000011111111"
If the bit error rate is 10%, then I want these 40 bits to contain 4 bit errors. One extreme random result: the errors happened in the first 4 bits:
"0000111100000000111111110000000011111111"
First of all, find out which char the bit lives in:
param is your bit to flip...
char *byteToWrite = &A[sizeof(A) - (param / 8) - 1];
So that will give you a pointer to the char at that array offset (the -1 accounts for 0-based array indexing vs. the size).
Then use the modulus (or more bit shifting if you're feeling adventurous) to find out which bit in that byte to flip:
*byteToWrite ^= (1u << param % 8);
So, for a param of 5, the byte at A[10] should have its 5th bit toggled.
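Putting those two lines together, a minimal self-contained sketch (the flipBit helper and the raw byte buffer are my own illustration, not code from the question):
#include <cstdio>

// Toggle bit number 'param' (0 = least significant bit of the last byte,
// counting towards the front of the buffer, as in the answer above).
void flipBit(unsigned char* buf, int size, int param) {
    unsigned char* byteToWrite = &buf[size - (param / 8) - 1];
    *byteToWrite ^= (unsigned char)(1u << (param % 8));
}

int main() {
    unsigned char data[4] = { 0x0A, 0xF5, 0x6D, 0x02 };
    flipBit(data, 4, 5);   // toggles bit 5 of the last byte: 0x02 -> 0x22
    std::printf("%02X %02X %02X %02X\n", data[0], data[1], data[2], data[3]);
}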
store the values of 2^n in an array
generate a random number seed
loop through x times (in this case 5) and do data ^= stored_values[random_num]
As an alternative to storing the 2^n values in an array, you could use bit shifting to get a random power of 2, like:
data ^= (1 << (random % 8))
Reflecting the first comment, you really could just write out that line 5 times in your function and avoid the overhead of a for loop entirely.
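A minimal sketch of that idea for a single data byte (the variable names and the use of std::rand are my own choices, not from the answer):
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main() {
    unsigned char data = 0x0A;                 // example byte
    std::srand((unsigned)std::time(nullptr));

    // Flip 5 randomly chosen bit positions (positions may repeat in this simple sketch).
    for (int i = 0; i < 5; ++i) {
        data ^= (unsigned char)(1u << (std::rand() % 8));
    }
    std::printf("%02X\n", (unsigned)data);
}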
You have a 32-bit number. You can treat the bits as part of that number and just XOR it with some random number that has exactly 5 bits set.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>

// Classic parallel bit count: returns the number of set bits in foo
int count_1s(unsigned int foo)
{
    unsigned int m = 0x55555555;
    unsigned int r = (foo & m) + ((foo >> 1) & m);
    m = 0x33333333;
    r = (r & m) + ((r >> 2) & m);
    m = 0x0F0F0F0F;
    r = (r & m) + ((r >> 4) & m);
    m = 0x00FF00FF;
    r = (r & m) + ((r >> 8) & m);
    m = 0x0000FFFF;
    return (int)((r & m) + ((r >> 16) & m));
}

int main()
{
    char input[] = "0A F5 6D 02";
    unsigned char data[4] = {};
    sscanf(input, "%2hhx %2hhx %2hhx %2hhx", &data[0], &data[1], &data[2], &data[3]);

    unsigned int x;
    memcpy(&x, data, sizeof(x));

    srand((unsigned)time(0));
    unsigned int y = (unsigned int)rand();
    while (count_1s(y) != 5)
    {
        y = (unsigned int)rand(); // keep drawing until exactly 5 bits are set
    }
    x ^= y;

    memcpy(data, &x, sizeof(x));
    printf("%02X %02X %02X %02X\n", data[0], data[1], data[2], data[3]);
    return 0;
}
I see no reason to convert the entire string back and forth from and to hex notation. Just pick a random character out of the hex string, convert it to a digit value, flip one of its bits, and convert it back to a hex character.
In plain C:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main (void)
{
    const char *hexToDec_lookup = "0123456789ABCDEF";
    char hexstr[] = "0A F5 6D 02";

    /* 0. make sure we're fairly random */
    srand((unsigned)time(0));

    /* 1. loop 5 times .. */
    int i;
    for (i = 0; i < 5; i++)
    {
        /* 2. pick a random hex digit
              we know it's one out of 8, grouped per 2 */
        int hexdigit = rand() & 7;
        hexdigit += (hexdigit >> 1);
        /* 3. convert the digit to binary */
        int hexvalue = hexstr[hexdigit] > '9' ? hexstr[hexdigit] - 'A' + 10 : hexstr[hexdigit] - '0';
        /* 4. flip a random bit */
        hexvalue ^= 1 << (rand() & 3);
        /* 5. write it back into position */
        hexstr[hexdigit] = hexToDec_lookup[hexvalue];
        printf ("[%s]\n", hexstr);
    }
    return 0;
}
It might even be possible to omit the convert-to-and-from-ASCII steps -- flip a bit in the character string, check if it's still a valid hex digit and if necessary, adjust.
First randomly choose x positions (each position consists of an array index and a bit position).
Now, if you want to flip the i-th bit from the right of a number n, find the quotient and remainder of n divided by 2^i, as follows:
code:
int divisor = 1 << i;          // 2^i
int remainder = n % divisor;   // the bits below bit i
int quotient = n / divisor;    // bit i and everything above it
quotient ^= 1;                 // flip bit i (the lowest bit of the quotient)
n = divisor * quotient + remainder;
Take the input mod 8 (e.g. 5 % 8)
Shift 0x80 to the right by that value (e.g. 5)
XOR this value with the (input/8)-th element of your character array.
code:
void flip_bit(int bit)
{
    Array[bit / 8] ^= (0x80 >> (bit % 8));
}
I have this number as a hex string:
002A05.
I need to set the 7th bit of this number to 1, so after conversion I will get
022A05
But it has to work with every 6-character hex number.
I tried converting the hex string to an integer via strtol, but that function strips leading zeros.
Please help me figure out how I can solve this.
int hex = 0x002A05;
int mask = 0x020000;
printf("%06X", hex | mask);
Hope this helps.
In a 24-bit number, bit #7 (counting from the left, as you did in your example, not from the right, as is done conventionally) is always going to land in the second hex digit from the left. You can solve your problem without converting the entire number to an integer by taking that second hex digit, converting it to a number 0..15, setting its bit #3 (again counting from the left), and converting the result back to a hex digit.
int fromHex(char c) {
    c = toupper(c);
    if (c >= '0' && c <= '9') {
        return c - '0';
    } else {
        return c - 'A' + 10;
    }
}

char toHexDigit(int n) {
    return n < 10 ? '0' + n : 'A' + n - 10;
}

char myNum[] = "002A05";
myNum[1] = toHexDigit(fromHex(myNum[1]) | 2);
printf("%s\n", myNum);
This prints '022A05'.
It sounds to me like you have a string, not a hex constant, that you want to manipulate. You can do it pretty easily by bit-twiddling the ASCII value of the hex character. If you have a char representing a hex character, like char h = '6'; or char h = 'C';, you can set the 3rd-from-the-left (2nd-from-the-right) bit of the number that the character represents using:
h = h > '7' ? h <= '9' ? h + 9 : ((h + 1) | 2) - 1 : h | 2;
So you can do this to the second character (4 + 3 bits) in your string. This works for any hex string with 2 or more characters. Here is your example:
char hex_string[] = "002A05";
// Get the second character from the string
char h = hex_string[1];
// Calculate the new character
h = h > '7' ? h <= '9' ? h + 9 : ((h + 1) | 2) - 1 : h | 2;
// Set the second character in the string to the result
hex_string[1] = h;
printf("%s", hex_string); // 022A05
You asked about strtol specifically, so to answer your question: just pad with leading zeros when printing, after you convert the number with strtol:
const char *s = "002A05";
int x = strtol(s, NULL, 16);
x |= (1<<17);
printf("%.6X\n",x);
I'm trying to convert an integer to a string right now, and I'm having a problem.
I've gotten the code written and working for the most part, but it has a small flaw when carrying to the next place. It's hard to describe, so I'll give you an example. Using base 26 with a character set consisting of the lowercase alphabet:
0 = "a"
1 = "b"
2 = "c"
...
25 = "z"
26 = "ba" (This should equal "aa")
It seems to skip the character at the zero place in the character set in certain situations.
The thing that's confusing me is I see nothing wrong with my code. I've been working on this for too long now, and I still can't figure it out.
const char* charset = "abcdefghijklmnopqrstuvwxyz";
int charsetLength = strlen(charset);

unsigned long long num = 5678; // Some random number, it doesn't matter

std::string key;

do
{
    unsigned int remainder = (num % charsetLength);
    num /= charsetLength;
    key.insert(key.begin(), charset[remainder]);
} while (num);
EDIT: The fact that the generated string is little endian is irrelevant for my application.
If I understand correctly what you want (the numbering used by Excel for columns: A, B, ..., Z, AA, AB, ...), this is a positional notation able to represent numbers starting from 1. The 26 digits have the values 1, 2, ..., 26 and the base is 26. So A has value 1, Z value 26, AA value 27, and so on. Computing this representation is very similar to the normal one; you just need to adjust for the offset of 1 instead of 0.
#include <string>
#include <iostream>
#include <climits>
std::string base26(unsigned long v)
{
    char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    size_t const base = sizeof(digits) - 1;
    char result[sizeof(unsigned long)*CHAR_BIT + 1];
    char* current = result + sizeof(result);
    *--current = '\0';
    while (v != 0) {
        v--;
        *--current = digits[v % base];
        v /= base;
    }
    return current;
}

// for testing
#include <cstdlib>
int main(int argc, char* argv[])
{
    for (int i = 1; i < argc; ++i) {
        unsigned long value = std::strtol(argv[i], 0, 0);
        std::cout << value << " = " << base26(value) << '\n';
    }
    return 0;
}
Running with 1 2 26 27 52 53 676 677 702 703 gives
1 = A
2 = B
26 = Z
27 = AA
52 = AZ
53 = BA
676 = YZ
677 = ZA
702 = ZZ
703 = AAA
Your problem is that 'a' == 0.
In other words, "aa" is not the answer, because that is really 00. "ba" is the correct answer, because 'b' has the value 1, which makes it 10 in base 26, i.e. 26 in decimal.
Your code is correct; you just seem to be misunderstanding it.
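If the goal really is the enumeration a, b, ..., z, aa, ab, ... (so that 26 maps to "aa"), here is a minimal sketch of how the question's loop could be adjusted; the adjustment (subtracting 1 from the quotient each round) is my own suggestion, not code from the answers:
#include <string>
#include <iostream>

int main() {
    const char* charset = "abcdefghijklmnopqrstuvwxyz";
    long long num = 26;            // should come out as "aa"
    std::string key;
    while (true) {
        key.insert(key.begin(), charset[num % 26]);
        num = num / 26 - 1;        // move to the next "digit"; -1 accounts for 'a' == 0
        if (num < 0) break;
    }
    std::cout << key << '\n';      // prints "aa"
}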
I think you should make a = 1 and z = 0, so you have abc...z just as in decimal 1234...90.
Compare it to the decimal system: 9 is followed by 10, not by 01!
To get AProgrammer's solution to compile on my system (I'm using gcc version 4.6.1, Ubuntu/Linaro 4.6.1-9ubuntu3), I needed to add the headers #include <climits> and #include <cstdlib>.