Base Conversion Problem - c++

I'm trying to convert an integer to a string right now, and I'm having a problem.
I've gotten the code written and working for the most part, but it has a small flaw when carrying to the next place. It's hard to describe, so I'll give you an example. Using base 26 with a character set consisting of the lowercase alphabet:
0 = "a"
1 = "b"
2 = "c"
...
25 = "z"
26 = "ba" (This should equal "aa")
It seems to skip the character at the zero place in the character set in certain situations.
The thing that's confusing me is I see nothing wrong with my code. I've been working on this for too long now, and I still can't figure it out.
char* charset = (char*)"abcdefghijklmnopqrstuvwxyz";
int charsetLength = strlen(charset);
unsigned long long num = 5678; // Some random number, it doesn't matter
std::string key;
do
{
unsigned int remainder = (num % charsetLength);
num /= charsetLength;
key.insert(key.begin(), charset[remainder]);
} while(num);
I have a feeling the function is tripping up over the modulo returning a zero, but I've been working on this so long, I can't figure out how it's happening. Any suggestions are welcome.
EDIT: The fact that the generated string is little endian is irrelevant for my application.

If I understand correctly what you want (the numbering used by Excel for columns: A, B, ..., Z, AA, AB, ...), this is a based notation able to represent numbers starting from 1. The 26 digits have values 1, 2, ..., 26 and the base is 26. So A has value 1, Z value 26, AA value 27... Computing this representation is very similar to the normal one; you just need to adjust for the offset of 1 instead of 0.
#include <string>
#include <iostream>
#include <climits>
std::string base26(unsigned long v)
{
char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
size_t const base = sizeof(digits) - 1;
char result[sizeof(unsigned long)*CHAR_BIT + 1];
char* current = result + sizeof(result);
*--current = '\0';
while (v != 0) {
v--;
*--current = digits[v % base];
v /= base;
}
return current;
}
// for testing
#include <cstdlib>
int main(int argc, char* argv[])
{
for (int i = 1; i < argc; ++i) {
unsigned long value = std::strtol(argv[i], 0, 0);
std::cout << value << " = " << base26(value) << '\n';
}
return 0;
}
Running with 1 2 26 27 52 53 676 677 702 703 gives
1 = A
2 = B
26 = Z
27 = AA
52 = AZ
53 = BA
676 = YZ
677 = ZA
702 = ZZ
703 = AAA

Your problem is that 'a' == 0.
In other words, "aa" is not the answer, because that is really 00. "ba" is the correct answer because 'b' == 1, which makes it 10 in base 26, i.e. 26 in decimal.
Your code is correct, you just seem to be misunderstanding it.

I think you should make 'a' = 1 and 'z' = 0, so you have abc...z just as in decimal 1234...90.
Compare it to the decimal system: 9 is followed by 10, not by 01!

To get AProgrammer's solution to compile on my system (gcc version 4.6.1, Ubuntu/Linaro 4.6.1-9ubuntu3), I needed to add the headers #include <climits> and #include <cstdlib>.

Related

Trouble understanding Caesar decryption steps

The following code will decrypt a caesar encrypted string given the ciphertext and the key:
#include <iostream>
std::string decrypt(std::string cipher, int key) {
std::string d = "";
for(int i=0; i<cipher.length();i++) {
d += ((cipher[i]-65-key+26) %26)+65;
}
return d;
}
int main()
{
std::cout << decrypt("WKLVLVJRRG", 3) << std::endl; // THISISGOOD
std::cout << decrypt("NBCMCMAIIX", 20) << std::endl; // THISISGOOD
}
I'm having trouble to understand the operations performed to compute the new character ASCII code at this line:
d += ((cipher[i]-65-key+26) %26)+65;
The first subtraction should shift the number range
Then we subtract the key, as Caesar decryption is defined
We add 26 to deal with negative numbers (?)
The modulo limits the output, as the alphabet range is 26 characters long
We come back to the old range by adding 65 at the end
What am I missing?
If we reorder the expression slightly, like this:
d += (((cipher[i] - 65) + (26 - key)) % 26) + 65;
We get a formula for rotating cipher[i] left by key:
cipher[i] - 65 brings the ASCII range A..Z into an integer range 0..25
(cipher[i] - 65 + 26 - key) % 26 rotates that value left by key (subtracts key modulo 26)
+ 65 to shift the range 0..25 back into ASCII range A..Z.
e.g. given a key of 2, A becomes Y, B becomes Z, C becomes A, etc.
Let me give you a detailed explanation of the Caesar cipher to help understand that formula. I will show ultra-simple code examples, but also more advanced one-liners.
The biggest problems are potential overflows. So, we need to deal with that.
Then we need to understand what encryption and decryption mean. If encryption shifts everything one to the right, decryption will shift it back to the left again.
So, with "def" and key=1, the encrypted string will be "efg".
And decryption with key=1 will shift it to the left again. Result: "def"
We can observe that we simply need to shift by -1, the negative of the key.
So, basically, encryption and decryption can be done with the same routine. We just need to invert the keys.
Let us look now at the overflow problem. For the moment, we will start with uppercase characters only. Characters have an associated code. For example, in ASCII, the letter 'A' is encoded as 65, 'B' as 66 and so on. Because we do not want to calculate with such numbers, we normalize them. We simply subtract 'A' from each character. Then
'A' - 'A' = 0
'B' - 'A' = 1
'C' - 'A' = 2
'D' - 'A' = 3
You see the pattern. If we now want to encrypt the letter 'C' with key 3, we can do the following:
'C' - 'A' + 3 = 5. Then we add 'A' again to get back a letter: 5 + 'A' = 'F'
That is the whole magic.
But what about an overflow beyond 'Z'? This can be handled by a simple modulo division.
Let us look at 'Z' + 1. We do 'Z' - 'A' = 25, then +1 = 26, and now modulo 26 = 0; then plus 'A' will be 'A'.
And so on and so on. The resulting formula is: (c-'A'+key)%26+'A'
Next, what about negative keys? This is also simple. Assume an 'A' and key=-1.
The result will be a 'Z'. But this is the same as shifting 25 to the right. So, we can simply convert a negative key to a positive shift. The simple statement is:
if (key < 0) key = (26 + (key % 26)) % 26;
And then we can call our transformation function with a simple lambda. One function for encryption and decryption. Just with an inverted key.
And with the above formula, there is not even a need to check for negative values. It will work for positive and negative values.
So, key = (26 + (key % 26)) % 26; will always work.
Some extended information if you work with the ASCII character representation. Please have a look at any ASCII table. You will see that any uppercase and lowercase character differ by 32. Or, if you look at the binary:
char dez bin char dez bin
'A' 65 0100 0001 'a' 97 0110 0001
'B' 66 0100 0010 'b' 98 0110 0010
'C' 67 0100 0011 'c' 99 0110 0011
. . .
So, if you already know that a character is alpha, then the only difference between upper- and lowercase is bit number 5. If we want to know whether a char is lowercase, we can get this by masking that bit: c & 0b00100000, which is equal to c & 32 or c & 0x20.
If we want to operate on either uppercase or lowercase characters, we can mask the "case" away. With c & 0b00011111, or c & 31, or c & 0x1F, we always get the uppercase equivalents, already normalized to start at one.
char dez bin Masking char dez bin Masking
'A' 65 0100 0001 & 0x1F = 1 'a' 97 0110 0001 & 0x1F = 1
'B' 66 0100 0010 & 0x1F = 2 'b' 98 0110 0010 & 0x1F = 2
'C' 67 0100 0011 & 0x1F = 3 'c' 99 0110 0011 & 0x1F = 3
. . .
So, if we use an alpha character, mask it, and subtract 1, then we get as a result 0..25 for any upper- or lowercase character.
Additionally, I would like to repeat the key handling. Positive keys will encrypt a string, negative keys will decrypt a string. But, as said above, negative keys can be transformed into positive ones. Example:
Shifting by -1 is same as shifting by +25
Shifting by -2 is same as shifting by +24
Shifting by -3 is same as shifting by +23
Shifting by -4 is same as shifting by +22
So, it is very obvious that we can calculate an always-positive key with: 26 + key. For negative keys, this gives us the above offsets.
And for positive keys, we would have an overflow over 26, which we can eliminate with a modulo-26 division:
'A'--> 0 + 26 = 26 26 % 26 = 0
'B'--> 1 + 26 = 27 27 % 26 = 1
'C'--> 2 + 26 = 28 28 % 26 = 2
'D'--> 3 + 26 = 29 29 % 26 = 3
--> (c + key) % 26 will eliminate overflows and result in the correct new en-/decrypted character.
And if we combine this with the above insight for negative keys, we can write ((26+(key%26))%26), which works for all positive and negative keys.
Combining that with the masking gives us the following fragment:
const char potentialLowerCaseIndicator = c & 0x20;
const char upperOrLower = c & 0x1F;
const char normalized = upperOrLower - 1;
const int withOffset = normalized + ((26+(key%26))%26);
const int withOverflowCompensation = withOffset % 26;
const char newUpperCaseCharacter = (char)withOverflowCompensation + 'A';
const char result = newUpperCaseCharacter | (potentialLowerCaseIndicator );
Of course, all the above many statements can be converted into one Lambda:
#include <string>
#include <algorithm>
#include <cctype>
#include <iostream>
// Simple function for Caesar encryption/decryption
std::string caesar(const std::string& in, int key) {
std::string res(in.size(), ' ');
std::transform(in.begin(), in.end(), res.begin(), [&](char c) {return std::isalpha(c) ? (char)((((c & 31) - 1 + ((26 + (key % 26)) % 26)) % 26 + 65) | (c & 32)) : c; });
return res;
}
int main() {
std::string test{ "aBcDeF xYzZ" };
std::cout << caesar(test, 5);
}
The last function can also be made more verbose for easier understanding:
std::string caesar1(const std::string& in, int key) {
std::string res(in.size(), ' ');
auto convert = [&](const char c) -> char {
char result = c;
if (std::isalpha(c)) {
// Handling of a negative key (Shift to left). Key will be converted to positive value
if (key < 0) {
// limit the key to 0,-1,...,-25
key = key % 26;
// Key was negative, so now we have something between 1 and 26
key = 26 + key;
};
// Check and remember if the original character was lower case
const bool originalIsLower = std::islower(c);
// We want to work with uppercase only
const char upperCaseChar = (char)std::toupper(c);
// But we want to start at 0, not at 'A' (65)
const int normalized = upperCaseChar - 'A';
// Now add the key
const int shifted = normalized + key;
// The sum may be bigger than 25, i.e. overflow. Cap it
const int capped = shifted % 26;
// Get back a character
const char convertedUppercase = (char)capped + 'A';
// And restore the original case
result = originalIsLower ? (char)std::tolower(convertedUppercase) : convertedUppercase;
}
return result;
};
std::transform(in.begin(), in.end(), res.begin(), convert);
return res;
}
And if you want to see a solution using only the simplest statements, see below.
#include <iostream>
#include <string>
using namespace std;
string caesar(string in, int key) {
// Here we will store the resulting encrypted/decrypted string
string result{};
// Handling of a negative key (Shift to left). Key will be converted to positive value
if (key < 0) {
// limit the key to 0,-1,...,-25
key = key % 26;
// Key was negative, so now we have something between 1 and 26
key = 26 + key;
};
// Read character by character from the string
for (unsigned int i = 0; i < in.length(); ++i) {
char c = in[i];
// Check for an alpha character
if ((c >= 'A' and c <= 'Z') or (c >= 'a' and c <= 'z')) {
// Check and remember if the original character was lower case
bool originalIsLower = (c >= 'a' and c <= 'z');
// We want to work with uppercase only
char upperCaseChar = originalIsLower ? c - ('a' - 'A') : c;
// But, we want to start with 0 and not with 'A' (65)
int normalized = upperCaseChar - 'A';
// Now add the key
int shifted = normalized + key;
// The sum may be bigger than 25, i.e. overflow. Cap it
int capped = shifted % 26;
// Get back a character
char convertedUppercase = (char)capped + 'A';
// And restore the original case
result += originalIsLower ? convertedUppercase + ('a' - 'A') : convertedUppercase;
}
else
result += c;
}
return result;
}
int main() {
string test{ "aBcDeF xYzZ" };
string encrypted = caesar(test, 5);
string decrypted = caesar(encrypted, -5);
cout << "Original: " << test << '\n';
cout << "Encrypted: " << encrypted << '\n';
cout << "Decrypted: " << decrypted << '\n';
}

Caesar cipher which moves the text about X positions [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
The community reviewed whether to reopen this question 1 year ago and left it closed:
Original close reason(s) were not resolved
I'm looking for a simple code (only c++ strings and loops allowed) which reads a one-line string text as a string and then outputs this string "encrypted by shifting".
It should look like this:
Please enter the text to be encrypted: abc def xyz!ABC XYZ?
Please enter the number of shift positions (as a positive integer):
3
def ghi abc!DEF ABC?
User input is underlined in bold.
And this is how far I got:
#include <iostream>
#include <string>
using namespace std;
int main()
{
string decrypted_Text;
int number_of_movements;
cout << "Please enter the text to be encrypted: ";
cin >> decrypted_Text;
cout << "Please enter the number of shift positions (as a positive integer): ";
cin >> number_of_movements;
system("PAUSE");
return(0);
}
I know that text.size() will be used here or rather can imagine that but just don't know how to implement that.
The biggest problems are potential overflows. So, we need to deal with that.
Then we need to understand what encryption and decryption mean. If encryption shifts everything one to the right, decryption will shift it back to the left again.
So, with "def" and key=1, the encrypted string will be "efg".
And decryption with key=1 will shift it to the left again. Result: "def"
We can observe that we simply need to shift by -1, the negative of the key.
So, basically, encryption and decryption can be done with the same routine. We just need to invert the keys.
Let us look now at the overflow problem. For the moment, we will start with uppercase characters only. Characters have an associated code. For example, in ASCII, the letter 'A' is encoded as 65, 'B' as 66 and so on. Because we do not want to calculate with such numbers, we normalize them. We simply subtract 'A' from each character. Then
'A' - 'A' = 0
'B' - 'A' = 1
'C' - 'A' = 2
'D' - 'A' = 3
You see the pattern. If we now want to encrypt the letter 'C' with key 3, we can do the following:
'C' - 'A' + 3 = 5. Then we add 'A' again to get back a letter: 5 + 'A' = 'F'
That is the whole magic.
But what about an overflow beyond 'Z'? This can be handled by a simple modulo division.
Let us look at 'Z' + 1. We do 'Z' - 'A' = 25, then +1 = 26, and now modulo 26 = 0; then plus 'A' will be 'A'.
And so on and so on. The resulting formula is: (c-'A'+key)%26+'A'
Next, what about negative keys? This is also simple. Assume an 'A' and key=-1.
The result will be a 'Z'. But this is the same as shifting 25 to the right. So, we can simply convert a negative key to a positive shift. The simple statement is:
if (key < 0) key = (26 + (key % 26)) % 26;
And then we can call our transformation function with a simple lambda. One function for encryption and decryption. Just with an inverted key.
And with the above formula, there is not even a need to check for negative values. It will work for positive and negative values.
So, key = (26 + (key % 26)) % 26; will always work.
Some extended information if you work with the ASCII character representation. Please have a look at any ASCII table. You will see that any uppercase and lowercase character differ by 32. Or, if you look at the binary:
char dez bin char dez bin
'A' 65 0100 0001 'a' 97 0110 0001
'B' 66 0100 0010 'b' 98 0110 0010
'C' 67 0100 0011 'c' 99 0110 0011
. . .
So, if you already know that a character is alpha, then the only difference between upper- and lowercase is bit number 5. If we want to know whether a char is lowercase, we can get this by masking that bit: c & 0b00100000, which is equal to c & 32 or c & 0x20.
If we want to operate on either uppercase or lowercase characters, we can mask the "case" away. With c & 0b00011111, or c & 31, or c & 0x1F, we always get the uppercase equivalents, already normalized to start at one.
char dez bin Masking char dez bin Masking
'A' 65 0100 0001 & 0x1F = 1 'a' 97 0110 0001 & 0x1F = 1
'B' 66 0100 0010 & 0x1F = 2 'b' 98 0110 0010 & 0x1F = 2
'C' 67 0100 0011 & 0x1F = 3 'c' 99 0110 0011 & 0x1F = 3
. . .
So, if we use an alpha character, mask it, and subtract 1, then we get as a result 0..25 for any upper- or lowercase character.
Additionally, I would like to repeat the key handling. Positive keys will encrypt a string, negative keys will decrypt a string. But, as said above, negative keys can be transformed into positive ones. Example:
Shifting by -1 is same as shifting by +25
Shifting by -2 is same as shifting by +24
Shifting by -3 is same as shifting by +23
Shifting by -4 is same as shifting by +22
So, it is very obvious that we can calculate an always-positive key with: 26 + key. For negative keys, this gives us the above offsets.
And for positive keys, we would have an overflow over 26, which we can eliminate with a modulo-26 division:
'A'--> 0 + 26 = 26 26 % 26 = 0
'B'--> 1 + 26 = 27 27 % 26 = 1
'C'--> 2 + 26 = 28 28 % 26 = 2
'D'--> 3 + 26 = 29 29 % 26 = 3
--> (c + key) % 26 will eliminate overflows and result in the correct new en-/decrypted character.
And if we combine this with the above insight for negative keys, we can write ((26+(key%26))%26), which works for all positive and negative keys.
Combining that with the masking gives us the following fragment:
const char potentialLowerCaseIndicator = c & 0x20;
const char upperOrLower = c & 0x1F;
const char normalized = upperOrLower - 1;
const int withOffset = normalized + ((26+(key%26))%26);
const int withOverflowCompensation = withOffset % 26;
const char newUpperCaseCharacter = (char)withOverflowCompensation + 'A';
const char result = newUpperCaseCharacter | (potentialLowerCaseIndicator );
Of course, all the above many statements can be converted into one Lambda:
#include <string>
#include <algorithm>
#include <cctype>
#include <iostream>
// Simple function for Caesar encryption/decryption
std::string caesar(const std::string& in, int key) {
std::string res(in.size(), ' ');
std::transform(in.begin(), in.end(), res.begin(), [&](char c) {return std::isalpha(c) ? (char)((((c & 31) - 1 + ((26 + (key % 26)) % 26)) % 26 + 65) | (c & 32)) : c; });
return res;
}
int main() {
std::string test{ "aBcDeF xYzZ" };
std::cout << caesar(test, 5);
}
The last function can also be made more verbose:
std::string caesar1(const std::string& in, int key) {
std::string res(in.size(), ' ');
auto convert = [&](const char c) -> char {
char result = c;
if (std::isalpha(c)) {
// Handling of a negative key (Shift to left). Key will be converted to positive value
if (key < 0) {
// limit the key to 0,-1,...,-25
key = key % 26;
// Key was negative, so now we have something between 1 and 26
key = 26 + key;
};
// Check and remember if the original character was lower case
const bool originalIsLower = std::islower(c);
// We want to work with uppercase only
const char upperCaseChar = (char)std::toupper(c);
// But we want to start at 0, not at 'A' (65)
const int normalized = upperCaseChar - 'A';
// Now add the key
const int shifted = normalized + key;
// The sum may be bigger than 25, i.e. overflow. Cap it
const int capped = shifted % 26;
// Get back a character
const char convertedUppercase = (char)capped + 'A';
// And restore the original case
result = originalIsLower ? (char)std::tolower(convertedUppercase) : convertedUppercase;
}
return result;
};
std::transform(in.begin(), in.end(), res.begin(), convert);
return res;
}
EDIT
Please see below a solution using only the simplest statements.
#include <iostream>
#include <string>
using namespace std;
string caesar(string in, int key) {
// Here we will store the resulting encrypted/decrypted string
string result{};
// Handling of a negative key (Shift to left). Key will be converted to positive value
if (key < 0) {
// limit the key to 0,-1,...,-25
key = key % 26;
// Key was negative, so now we have something between 1 and 26
key = 26 + key;
};
// Read character by character from the string
for (unsigned int i = 0; i < in.length(); ++i) {
char c = in[i];
// Check for an alpha character
if ((c >= 'A' and c <= 'Z') or (c >= 'a' and c <= 'z')) {
// Check and remember if the original character was lower case
bool originalIsLower = (c >= 'a' and c <= 'z');
// We want to work with uppercase only
char upperCaseChar = originalIsLower ? c - ('a' - 'A') : c;
// But, we want to start with 0 and not with 'A' (65)
int normalized = upperCaseChar - 'A';
// Now add the key
int shifted = normalized + key;
// The sum may be bigger than 25, i.e. overflow. Cap it
int capped = shifted % 26;
// Get back a character
char convertedUppercase = (char)capped + 'A';
// And restore the original case
result += originalIsLower ? convertedUppercase + ('a' - 'A') : convertedUppercase;
}
else
result += c;
}
return result;
}
int main() {
string test{ "aBcDeF xYzZ" };
string encrypted = caesar(test, 5);
string decrypted = caesar(encrypted, -5);
cout << "Original: " << test << '\n';
cout << "Encrypted: " << encrypted << '\n';
cout << "Decrypted: " << decrypted << '\n';
}

VBA convert Excel Style Column Name (with 52 charset) to original number

I have a C++ program that takes an integer and converts it to lower- and uppercase alphabet characters, similar to what Excel does to convert a column index to a column name, but also including lowercase letters.
#include <string>
#include <iostream>
#include <climits>
using namespace std;
string ConvertNum(unsigned long v)
{
char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
size_t const base = sizeof(digits) - 1;
char result[sizeof(unsigned long)*CHAR_BIT + 1];
char* current = result + sizeof(result);
*--current = '\0';
while (v != 0) {
v--;
*--current = digits[v % base];
v /= base;
}
return current;
}
// for testing
int main()
{
cout<< ConvertNum(705);
return 0;
}
I need the VBA function to reverse this back to the original number. I do not have a lot of experience with C++, so I cannot figure out the logic to reverse this in VBA. Can anyone please help?
Update 1: I don't need already-written code, just some help with the logic to reverse it. I'll try to convert the logic into code myself.
Update 2: Based on the wonderful explanation and help provided in the answer, it's clear that the code is not converting the number to a usual base 52, so the original function name was misleading. I have changed it to eliminate confusion for future readers.
EDIT: The character string format being translated to decimal by the code described below is NOT a standard base-52 schema. The schema does not include 0 or any other digits. Therefore this code should not be used, as is, to translate a standard base-52 value to decimal.
O.K., this is based on converting a single character according to its position in a long string. The string is:
chSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
The InStr() function tells us that A is in position 1, Z is in position 26, and a is in position 27. All characters get converted the same way.
I use this rather than Asc() because Asc() has a gap between the upper- and lowercase letters.
The least significant character's value gets multiplied by 52^0. The next character's value gets multiplied by 52^1. The third character's value gets multiplied by 52^2, etc. The code:
Public Function deccimal(s As String) As Long
Dim chSET As String, arr(1 To 52) As String
Dim L As Long, i As Long, K As Long, CH As String
chSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
deccimal = 0
L = Len(s)
K = 0
For i = L To 1 Step -1
CH = Mid(s, i, 1)
deccimal = deccimal + InStr(1, chSET, CH) * (52 ^ K)
K = K + 1
Next i
End Function
NOTE:
This is NOT the way bases are usually encoded. Usually bases start with a 0 and allow 0 in any of the encoded value's positions. In all my previous UDF()'s similar to this one, the first character in chSET is a 0 and I have to use (InStr(1, chSET, CH) - 1) * (52 ^ K)
Gary's Student provided a good and easy-to-understand way to get the number from what I call "Excel-style base 52", and this is what you wanted.
However, this is a little different from the usual base 52. I'll try to explain the difference from regular base 52 and its conversion. There might be an easier way, but this is the best I could come up with that also explains the code you provided.
As an example: the number zz..zz means 51*(1 + 52 + 52^2 + ... + 52^(n-1)) in regular base 52 and 52*(1 + 52 + 52^2 + ... + 52^(n-1)) in Excel-style base 52. So Excel style reaches higher numbers with fewer digits. Here is how much that difference is, based on the number of digits. How is this possible? It uses leading zeros, so 1, 01, 001 etc. are all different numbers. Why don't we do this normally? It would mess up the easy arithmetic of the usual system.
We can't just shift all the digits by one after the base change, and we can't just subtract 1 before the base change to counter the fact that we start at 1 instead of 0. I'll outline the problem with base 10. If we used Excel-style base 10 to number the columns, we would have to count like "0, 1, 2, ..., 9, 00, 01, 02, ...". At first glance it looks like we just have to shift the digits so we start counting at 1, but this only works up to the 10th number.
1 2 .. 10 11 .. 20 21 .. 99 100 .. 110 111 //normal counting
0 1 .. 9 00 .. 09 10 .. 88 89 .. 99 000 //excel style counting
You notice that whenever we add a new digit, we shift again. To counter that, we have to do the shift by 1 before calculating each digit, not shift the digit after calculating it. (This only makes a difference at powers of 52.) Note that we still assign A to 0, B to 1, etc.
Normally what you would do to change bases is looping with something like
nextDigit = x mod base //determining the last digit
x = x/base //removing the last digit
//terminate if x = 0
However now it is
x = x - 1
nextDigit = x mod base
x = x/base
//terminate if x = 0
So x is decremented by 1 first! Let's do a quick check for x=52:
Regular base 52:
nextDigit = x mod 52 //52 mod 52 = 0 so the next digit is A
x = x/52 //x is now 1
//next iteration
nextDigit = x mod 52 //1 mod 52 = 1 so the next digit is B
x = x/52 //1/52 = 0 in integer arithmetic
//terminate because x = 0
//result is BA
Excel style:
x = x-1 //x is now 51
nextDigit = x mod 52 //51 mod 52 = 51 so the next digit is z
x = x/52 //51/52 = 0 in integer arithmetic
//terminate because x=0
//result is z
It works!
Part 2: Your C++ code
Now let's read your code:
x % y means x mod y.
When you do calculations with integers, the result will be an integer, obtained by discarding the fractional part. So 39/10 produces 3, etc.
x++ and ++x both increment x by 1.
You can use this in other statements to save a line of code. x++ means x is incremented after the statement is evaluated and ++x means it is incremented before the statement is evaluated
y=f(x++);
is the same as
y = f(x);
x = x + 1;
while
y=f(++x);
is the same as
x = x + 1;
y = f(x);
This goes the same way for --
char* p creates a pointer to a char.
A pointer points to a certain location in memory. If you change the pointer, it points to a different location. E.g. doing p-- moves the pointer one to the left. To read or write the value saved at that location, use *p. E.g. *p='a'; writes 'a' to the memory location that p points at. *p--='a'; writes 'a' to memory but moves the pointer to the left afterwards, so *p is then whatever is in memory to the left of the 'a'.
Strings are just arrays of type char.
The end of a string is always '\0'; when the computer reads a string, it continues until it finds '\0'.
This is hopefully enough to understand the code. Here it is
#include <string>
#include <iostream>
#include <climits>
using namespace std;
string base52(unsigned long v)
{
char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; //The digits. (Arrays start at 0)
size_t const base = sizeof(digits) - 1; //The base, based on the digits that were given
char result[sizeof(unsigned long)*CHAR_BIT + 1]; //The array that holds the answer
//sizeof(unsigned long)*CHAR_BIT is the number of bits of an unsigned long
//which means it is the absolute longest that v can be in any base.
//The +1 is to hold the terminating character '\0'
char* current = result + sizeof(result); //This is a pointer that is supposed to point to the next digit. It points to the first byte after the result array (because it is start + length)
//(i.e. it will go through the memory from high to low)
*--current = '\0'; //The pointer gets moved one to the left (to the last char of result) and the terminating char is added
//The pointer has to be moved to the left first because it was actually pointing to the first byte after the result
while (v != 0) { //loop until v is zero (until there are no more digits left)
v--; //v = v - 1. This is the important part that does the 1 -> A part
*--current = digits[v % base]; // the pointer is moved one to the left and the corresponding digit is saved
v /= base; //the last digit is dropped
}
return current; //current is returned, which points at the last saved digit. The rest of the result array (before current) is not used.
}
// for testing
int main()
{
cout<< base52(705);
return 0;
}

How to random flip binary bit of char in C/C++

If I have a char array A, I use it to store hex
A = "0A F5 6D 02" size=11
The binary representation of this char array is:
00001010 11110101 01101101 00000010
I want to ask: is there any function that can randomly flip bits?
That is:
if the parameter is 5
00001010 11110101 01101101 00000010
-->
10001110 11110001 01101001 00100010
it will randomly choose 5 bits to flip.
I am trying to turn this hex data into binary data and use a bitmask method to achieve my requirement, then turn it back to hex. I am curious whether there is any method to do this job more quickly.
Sorry, my question description was not clear enough. Simply put, I have some hex data, and I want to simulate bit errors in this data. For example, if I have 5 bytes of hex data:
"FF00FF00FF"
binary representation is
"1111111100000000111111110000000011111111"
If the bit error rate is 10%, then I want these 40 bits to contain 4 bit errors. One extreme random result: the errors happened in the first 4 bits:
"0000111100000000111111110000000011111111"
First of all, find out which char of the array holds the bit:
param is your bit to flip...
char *byteToWrite = &A[sizeof(A) - (param / 8) - 1];
So that will give you a pointer to the char at that array offset (-1 for 0 array offset vs size)
Then get modulus (or more bit shifting if you're feeling adventurous) to find out which bit in here to flip:
*byteToWrite ^= (1u << param % 8);
So that should result for a param of 5 for the byte at A[10] to have its 5th bit toggled.
store the values of 2^n in an array
generate a random number seed
loop through x times (in this case 5) and go data ^= stored_values[random_num]
Alternatively to storing the 2^n values in an array, you could bit-shift to a random power of 2 (note the modulus must be 8, not 7, or bit 7 can never be flipped):
data ^= (1 << random_num % 8)
Reflecting the first comment, you really could just write out that line 5 times in your function and avoid the overhead of a for loop entirely.
You have a 32-bit number. You can treat the bits as parts of the number and just XOR this number with some random number that has exactly 5 bits set.
#include <cstdio>
#include <cstdlib>
int count_1s(unsigned int foo)
{
unsigned int m = 0x55555555;
unsigned int r = (foo&m) + ((foo>>1)&m);
m = 0x33333333;
r = (r&m) + ((r>>2)&m);
m = 0x0F0F0F0F;
r = (r&m) + ((r>>4)&m);
m = 0x00FF00FF;
r = (r&m) + ((r>>8)&m);
m = 0x0000FFFF;
return (r&m) + ((r>>16)&m);
}
int main()
{
char input[] = "0A F5 6D 02";
unsigned char data[4] = {};
sscanf(input, "%2hhx %2hhx %2hhx %2hhx", &data[0], &data[1], &data[2], &data[3]);
unsigned int *x = reinterpret_cast<unsigned int*>(data);
unsigned int y = rand();
while (count_1s(y) != 5)
{
y = rand(); // keep drawing until exactly 5 bits are set
}
*x ^= y;
printf("%02X %02X %02X %02X\n", data[0], data[1], data[2], data[3]);
return 0;
}
I see no reason to convert the entire string back and forth from and to hex notation. Just pick a random character out of the hex string, convert it to a digit, change a bit, and convert it back to a hex character.
In plain C:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main (void)
{
char *hexToDec_lookup = "0123456789ABCDEF";
char hexstr[] = "0A F5 6D 02";
/* 0. make sure we're fairly random */
srand(time(0));
/* 1. loop 5 times .. */
int i;
for (i=0; i<5; i++)
{
/* 2. pick a random hex digit
we know it's one out of 8, grouped per 2 */
int hexdigit = rand() & 7;
hexdigit += (hexdigit>>1);
/* 3. convert the digit to binary */
int hexvalue = hexstr[hexdigit] > '9' ? hexstr[hexdigit] - 'A'+10 : hexstr[hexdigit]-'0';
/* 4. flip a random bit */
hexvalue ^= 1 << (rand() & 3);
/* 5. write it back into position */
hexstr[hexdigit] = hexToDec_lookup[hexvalue];
printf ("[%s]\n", hexstr);
}
return 0;
}
It might even be possible to omit the convert-to-and-from-ASCII steps -- flip a bit in the character string, check if it's still a valid hex digit and if necessary, adjust.
First randomly choose x positions (each position consisting of an array index and a bit position).
Now, if you want to flip the ith bit from the right (counting from 0) of a number n, split n around 2^i using the quotient and remainder:
code:
int divisor = 1 << i;        // 2^i
int low = n % divisor;       // the i bits below the target bit
int high = n / divisor;      // the target bit and everything above it
high ^= 1;                   // flip the target bit (now the lowest bit of high)
n = high * divisor + low;    // reassemble the number
Take the input mod 8 (e.g. 5 % 8).
Shift 0x80 to the right by that value.
XOR the result with the (input/8)th element of your character array.
code:
void flip_bit(int bit)
{
Array[bit/8] ^= (0x80>>(bit%8));
}

Write a program using bitwise operators to determine positive remaindar

Write a program that reads an integer value from the keyboard into a variable of type int, and uses one of the bitwise operators (i.e. not the % operator!) to determine the positive remainder when divided by 8.
For example, 29 = (3x8)+5 and 14 = ( 2x8)+2 have positive remainder 5 and 2, respectively, when divided by 8.
I tried to search for how to solve it. What I did was break the given example numbers into binary:
29 => 011101
8 => 001000
5 => 000101
I don't know what operation I should do with 29 and 8 to get the result 5 in binary.
While searching, I found people saying that we should AND with 7:
remainder = remainder & 7 ;
Then I tried to do this with the value itself:
value = value & 7 ;
and here's my code after doing it:
#include <iostream>
using std::cout;
using std::endl;
using std::cin;
int main()
{
int value = 0;
int divisor = 8;
int remainder = 0;
cout << "Enter an integer and I'll divide it by 8 and give you the remainder!"
<<endl;
cin >> value;
value = value & 7;
remainder = value & divisor;
cout << remainder;
return 0;
}
It gave me the result 0 when I used the value 29. I don't know whether what I wrote was right or not.
Simply & the number itself with 7. Also, 29 = 0b11101. To generalise, the remainder when dividing by a number 2^n is found by &ing it with (2^n) - 1 (^ == power of).
Thus, to obtain the remainder modulo 16, & with 15, and so on.
Since 8 is exactly 2^3, the modulo-8 remainder of any number is composed of its last three binary digits, i.e. it equals the number bitwise-ANDed with 7:
unsigned rem8 = number & 7;
(7 is 111 in binary, that's why.)