Two parts of my decoding program aren't working as expected - C++
I came here for help (with a more specific part) a couple of days ago, but the solution I got didn't quite work. Basically I'm writing a program that serves 3 purposes: it decodes a Rot 13 cipher, decodes a Rot 6 cipher, and puts user input through the equation "x = 2n - 1", where n is the user input.
Rot 13 works fine, but Rot 6 outputs gibberish, and the equation outputs a letter (putting in "8" gives you 'o' instead of 15).
I know that this could be done with fewer functions, and I probably don't need a list, but this is for an assignment and I need them.
I know that I am not great at this, but any help would be appreciated.
#include <iostream>
#include <string>
#include <list>
#include <array>
using namespace std;

string coffeeCode(string input) { //Coffee code= 2n-1 where n=a number in a string
    double index{};
    input[index] = 2*input[index]-1;
    return input;
};

string rot6(string input) {
    int lower[] = { 'a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z',' ' };
    int upper[] = { 'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z' };
    int inputSize = input.size();  // rot 6 rotates letters so that a{0}->g[6]
    int index{};                   // m[12]->s[18]
                                   // n->t
    while (index != inputSize) {   // z->f
        if (input[index] >= lower[0] && input[index] <= lower[19])
            input[index] = input[index] + 6;
        else if (input[index] >= lower[20] && input[index] <= lower[25])
            input[index] = input[index] - 20;
        else if (input[index] >= upper[0] && input[index] <= upper[19])
            input[index] = input[index] + 6;
        else if (input[index] <= upper[20] && input[index] <= upper[25])
            input[index] = input[index] - 20;
        index++;
    }
    return input;
}

string rot13(string input) { //Decodes into rot 13
    int lower[] = { 'a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z',' ' };
    int upper[] = { 'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z' };
    int inputSize = input.size();
    int index{};
    while (index != inputSize) {
        if (input[index] >= lower[0] && input[index] <= lower[12])
            input[index] = input[index] + 13;
        else if (input[index] >= lower[13] && input[index] <= lower[25])
            input[index] = input[index] - 13;
        else if (input[index] >= upper[0] && input[index] <= upper[12])
            input[index] = input[index] + 13;
        else if (input[index] <= upper[13] && input[index] <= upper[25])
            input[index] = input[index] - 13;
        index++;
    }
    return input;
}

int main() {
    string plaintext;
    string ans13;
    string ans6;
    string ansCoffee;
    cout << "Whats the message Spy Guy: ";
    getline(cin, plaintext);
    ans13 = rot13(plaintext);
    ans6 = rot6(plaintext);
    ansCoffee = coffeeCode(plaintext);
    cout << "One of these is your decoded message" << endl << "In Rot 13: " << ans13 << endl << "In Rot 6: " << ans6 << endl
         << "In Coffee Code: " << ansCoffee << endl;
    return 0;
}
Your best chance at understanding what you are doing is reading about how C++ handles characters, and what ASCII is: http://www.asciitable.com/
Just a quick summary:
The ASCII table assigns every character in the English alphabet a number (or a code), and that number is what C++ actually stores in memory.
So, when you do char c = 'a', that is equivalent to doing char c = 97.
The ASCII table is pretty organised too: all capital letters are in alphabetical order, starting from 65 (which is A) to 90 (which is Z). The same is true for lowercase letters, which go from 97 to 122, and for digits, which go from 48 to 57.
This can be used to determine what kind of character a variable is:
if ('a' <= input[index] && input[index] <= 'z') {
    // It's lower case
}
if ('A' <= input[index] && input[index] <= 'Z') {
    // It's upper case
}
Note that when you put a single character in single quotes, the compiler will replace it with its ASCII code, so you don't actually need to memorize the table. Think about how you could construct an if to determine whether a character is a digit.
Here is how I would implement rot6. It may not be the best, but I think it's alright.
string rot6(string input) {
    for (int index = 0; index < input.size(); ++index) {
        if ('a' <= input[index] && input[index] <= 'z') { // It's lower case
            input[index] = 'a' + ((input[index] - 'a') + 6) % 26;
        }
        else if ('A' <= input[index] && input[index] <= 'Z') { // It's upper case
            input[index] = 'A' + ((input[index] - 'A') + 6) % 26;
        }
        else if ('0' <= input[index] && input[index] <= '9') { // It's a digit
            input[index] = '0' + ((input[index] - '0') + 6) % 10;
        }
        else { // It's an error
            return "Error, bad input!";
        }
    }
    return input;
}
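That leaves the coffee code. The root cause there is similar: input[index] is a character, so 2*input[index]-1 doubles the ASCII code of '8' (which is 56), giving 111, the code for 'o'. A minimal sketch of one way to handle it (assuming the whole message is just the number n, and using std::stoi / std::to_string from the <string> header you already include):

string coffeeCode(string input) { // Coffee code = 2n-1 where n is the number in the string
    try {
        int n = stoi(input);          // "8" -> the value 8 (the character '8' is code 56, so convert first)
        return to_string(2 * n - 1);  // 2*8 - 1 = 15 -> "15"
    }
    catch (...) {                     // stoi throws if the input isn't a number at all
        return "Error, bad input!";
    }
}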
Related
Affine cipher decryption, output differs for upper case and lower case
I have a problem when decrypting a plaintext using the Affine cipher. Encryption works fine, but applying the same logic for decryption of lower case/upper case characters returns different output. Here is the output:

Encrypted Message is : ulctkbsjarizqhypgxofwnevmd ULCTKBSJARIZQHYPGXOFWNEVMD
Decrypted Message is: opqrstuvwxyzabcdefghijklmn ABCDEFGHIJKLMNOPQRSTUVWXYZ

I suspect it has something to do with retrieving of ASCII values, can someone correct me? Here is my code:

#include <bits/stdc++.h>
using namespace std;

//Key values of a and b
const int a = 17;
const int b = 20;

string encryptMessage(string plainText) {
    string cipher = "";
    for (int i = 0; i < plainText.length(); i++) {
        if (plainText[i] != ' ') {
            if ((plainText[i] >= 'a' && plainText[i] <= 'z') || (plainText[i] >= 'A' && plainText[i] <= 'Z')) {
                if (plainText[i] >= 'a' && plainText[i] <= 'z') {
                    cipher = cipher + (char) ((((a * (plainText[i]-'a') ) + b) % 26) + 'a');
                }
                else if (plainText[i] >= 'A' && plainText[i] <= 'Z') {
                    cipher = cipher + (char) ((((a * (plainText[i]-'A') ) + b) % 26) + 'A');
                }
            }
            else {
                cipher += plainText[i];
            }
        }
        else {
            cipher += plainText[i];
        }
    }
    return cipher;
}

string decryptCipher(string cipher) {
    string plainText = "";
    int aInverse = 0;
    int flag = 0;
    for (int i = 0; i < 26; i++) {
        flag = (a * i) % 26;
        if (flag == 1) {
            aInverse = i;
        }
    }
    for (int i = 0; i < cipher.length(); i++) {
        if (cipher[i] != ' ') {
            if ((cipher[i] >= 'a' && cipher[i] <= 'z') || (cipher[i] >= 'A' && cipher[i] <= 'Z')) {
                if (cipher[i] >= 'a' && cipher[i] <= 'z') {
                    plainText = plainText + (char) ((((aInverse * (cipher[i]+ 'a') ) - b) % 26) + 'a');
                }
                else if (cipher[i] >= 'A' && cipher[i] <= 'Z') {
                    plainText = plainText + (char) (((aInverse * ((cipher[i]+'A' - b)) % 26)) + 'A');
                }
            }
            else {
                plainText += cipher[i];
            }
        }
        else
            plainText += cipher[i];
    }
    return plainText;
}

//Driver Program
int main(void) {
    string msg = "abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    //Calling encryption function
    string cipherText = encryptMessage(msg);
    cout << "Encrypted Message is : " << cipherText << endl;

    //Calling Decryption function
    cout << "Decrypted Message is: " << decryptCipher(cipherText);
    return 0;
}
I have been thinking about this for some time and, although I can't provide a complete explanation, I have a couple of 'observations' that may be useful, plus a 'cheat' workaround.

First, although you say you use "the same logic for decryption of lower case/upper case," a character-wise alignment of the code from each of your decryption blocks shows that this isn't quite true:

plainText = plainText + (char) ((((aInverse * (cipher[i]+ 'a') ) - b) % 26) + 'a'); // Lower case
plainText = plainText + (char) (((aInverse * ((cipher[i]+'A' - b)) % 26)) + 'A');   // Upper case

So, 'fixing' the lower case code to be exactly analogous to the (working) code for upper case (and removing redundant parentheses) gives this:

if (cipher[i] >= 'a' && cipher[i] <= 'z') {
    plainText = plainText + (char)( ( (aInverse * (cipher[i] + 'a' - b) ) % 26 ) + 'a' );
}
else if (cipher[i] >= 'A' && cipher[i] <= 'Z') {
    plainText = plainText + (char)( ( (aInverse * (cipher[i] + 'A' - b) ) % 26 ) + 'A' );
}

However, this doesn't actually resolve the issue (it just changes it slightly), as the output then is as follows:

Encrypted Message is : ulctkbsjarizqhypgxofwnevmd ULCTKBSJARIZQHYPGXOFWNEVMD
Decrypted Message is : qrstuvwxyzabcdefghijklmnop ABCDEFGHIJKLMNOPQRSTUVWXYZ

The problem here is that the lowercase values are all 'rotated' by the value 16 - which looks suspiciously close to the value for a. Also, note that, although a is used in the encryption formula, it is not used in your decryption. So, I have come up with the following workaround, assuming that (for reasons yet to be deduced) when decoding upper case values, this 16 is somehow lost in the bit-representation of the ASCII values:

if ((cipher[i] >= 'a' && cipher[i] <= 'z') || (cipher[i] >= 'A' && cipher[i] <= 'Z')) {
    int offset = ((cipher[i] - 'A') / 26) ? a - 1 : 0;
    if (cipher[i] >= 'a' && cipher[i] <= 'z') {
        plainText = plainText + (char)( ( (aInverse * (cipher[i] + 'a' - b) - offset ) % 26 ) + 'a' );
    }
    else if (cipher[i] >= 'A' && cipher[i] <= 'Z') {
        plainText = plainText + (char)( ( (aInverse * (cipher[i] + 'A' - b) - offset ) % 26 ) + 'A' );
    }
}

Note that your code can be further simplified/clarified using functions provided by the standard library and removing some 'redundant' checks:

for (int i = 0; i < cipher.length(); i++) {
    if (isalpha(cipher[i])) {
        int offset = islower(cipher[i]) ? a - 1 : 0;
        int letter = islower(cipher[i]) ? 'a' : 'A';
        plainText = plainText + (char)(((aInverse * (cipher[i] + letter - b) - offset) % 26) + letter);
    }
    else {
        plainText += cipher[i];
    }
}
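The mystery of the 16 can actually be pinned down: adding the base instead of subtracting it is harmless for upper case because 2*'A' = 130 is a multiple of 26, but for lower case 2*'a' = 194 leaves a remainder of 12, and aInverse * 12 = 23 * 12 = 276, which is 16 mod 26, exactly the rotation observed above. The textbook decryption therefore subtracts the base: x = aInverse * (y - b) mod 26, where y is the cipher letter's index 0..25. A minimal sketch of that form (the helper function and its name are mine, not from the post):

#include <cctype>

// Sketch only: standard affine decryption of a single character.
// aInverse is the modular inverse of a (23 for a = 17), b is the key offset (20).
char affineDecryptChar(char ch, int aInverse, int b) {
    if (!isalpha((unsigned char)ch)) return ch;        // pass non-letters through
    char base = islower((unsigned char)ch) ? 'a' : 'A';
    int y = ch - base;                                 // cipher letter index 0..25
    int x = ((aInverse * (y - b)) % 26 + 26) % 26;     // aInverse * (y - b), kept in 0..25
    return (char)(base + x);
}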
How to obtain the integers from a file that has strings and integers, and store them into an array in C++?
I want to obtain integers from a file that has strings too, and store them into an array to do some operations on them. The integers can be 1 or 12 or 234, so up to 3 digits. I am trying to do that, but the output stops when I run the code:

void GetNumFromFile (ifstream &file1, char & contents) {
    int digits[20];
    file1.get(contents);
    while (!file1.eof()) {
        for (int n = 0; n < 10; n++) {
            if (('0' <= contents && contents <= '9') && ('0' >= contents+1 && contents+1 > '9'));
                digits[n] = contents;
            if (('0' <= contents && contents <= '9') && ('0' <= contents+1 && contents+1 < '9'));
                digits[n] = contents;
            if (('0' <= contents && contents <= '9') && ('0' <= contents+1 && contents+1 <= '9') && ('0' <= contents+2 && contents+2 < '9'));
                digits[n] = contents;
        }
        continue;
    }
    for (int i = 0; i <= 20; i++) {
        cout << *(digits + i) << endl;
    }
}
You have to deal with the number of digits of the number found:

int digits[20];
int i = 0;
short int aux[3]; // to format each digit of the numbers
ifstream file1("filepath");
char contents;
file1.get(contents); // first char
if (!file1.eof()) // test if you have only one char in the file
{
    while (!file1.eof() && i < 20) // limit added to read only 20 numbers
    {
        if (contents <= '9' && contents >= '0') // if character is in number range
        {
            aux[0] = contents - '0'; // converting the char to the right integer
            file1.get(contents);
            if (contents <= '9' && contents >= '0') // if contents is a number, continue on
            {
                aux[1] = contents - '0';
                if (!file1.eof()) // if it has more chars to read, continue on
                {
                    file1.get(contents);
                    if (contents <= '9' && contents >= '0') // if it is an integer, continue on
                    {
                        aux[2] = contents - '0';
                        file1.get(contents); // will read the same char again at eof, but that has no effect at all
                        //aux[0] *= 100; // define hundred
                        //aux[1] *= 10;  // define ten
                        digits[i++] = (aux[0] * 100) + (aux[1] * 10) + aux[2];
                    }
                    else
                    {
                        //aux[0] *= 10; // define ten
                        digits[i++] = (aux[0] * 10) + aux[1];
                    }
                }
                else
                {
                    digits[i++] = (aux[0] * 10) + aux[1];
                }
            }
            else
            {
                digits[i++] = aux[0];
            }
        }
    }
}
else if (contents <= '9' && contents >= '0' && i < 20) // check if the only char is a number
{
    digits[i++] = contents - '0';
}

If you want to read a number of undefined size, then you will have to allocate memory to store each digit of the number with new (C++) or malloc (C/C++).
First observation: you iterate out of bounds of the array:

int digits[20];
for (int i = 0; i <= 20; i++)

That is 20 elements and 21 iterations, which is undefined behavior, so everything is possible here (if your program eventually gets here).

Next, you read from the file once and then you have an infinite loop, because the expression !file1.eof() stays either true or false for the rest of the program run. Isn't that the reason the "output stops"?

The third finding: your if statements are useless because of the semicolon after each condition:

if (('0' <= contents && contents <= '9') && ('0' >= contents+1 && contents+1 > '9'));
    digits[n] = contents;

You just assign digits[n] = contents; without any check.

I also don't see any reason for passing a reference to char into the function. Why not make it a local variable?
You will first need to add the get() call inside the loop as well, in order to actually reach the end of the file. Furthermore, once a char is found to be a digit, add an inner while loop to keep asking for the next character. e.g.

int digits[20];
int i = 0;
ifstream file1("filepath");
char contents;
while (!file1.eof())
{
    file1.get(contents); // get the next character
    if (contents <= '9' && contents >= '0' && i < 20) // if character is in number range
    {
        digits[i++] = contents - '0'; // converting the char to the right integer
        file1.get(contents);
        while (contents <= '9' && contents >= '0' && i < 20) // while it is an integer, continue on
        {
            digits[i++] = contents - '0';
            file1.get(contents);
        }
    }
}
// do other stuff here
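If the assignment allows the standard library to do more of the work, a shorter route (just a sketch, and it assumes the numbers in the file are separated from the words by whitespace) is to read whitespace-delimited tokens and keep only the ones that parse as integers:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main() {
    ifstream file1("filepath");      // same placeholder path as above
    vector<int> numbers;
    string token;
    while (file1 >> token) {         // reads one whitespace-delimited word at a time
        try {
            size_t pos = 0;
            int value = stoi(token, &pos);
            if (pos == token.size()) // keep only tokens that are entirely a number
                numbers.push_back(value);
        }
        catch (...) {                // stoi throws if the token doesn't start with a digit
        }
    }
    for (int n : numbers)
        cout << n << endl;
}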
How can I do "rot-13 decode" in MFC?
Well, I want to make a function which has encoding and decoding functionality. So, I studied "rot-13 encoding" and solved it like this:

char* szTemp = "Hello World";
for (int i = 0; i < strlen(szTemp); i++)
{
    if (szTemp[i] >= 'a' && szTemp[i] <= 'm')
        szTemp[i] += 13;
    else if (szTemp[i] >= 'A' && szTemp[i] <= 'M')
        szTemp[i] += 13;
    else if (szTemp[i] >= 'n' && szTemp[i] <= 'z')
        szTemp[i] -= 13;
    else if (szTemp[i] >= 'N' && szTemp[i] <= 'Z')
        szTemp[i] -= 13;
}
MessageBox(szTemp);

But it has some error. What is it? Anyone help me!
In MFC, it's all about the CString...

CString sTemp = "Hello World";
CString sResult = "";
int nLength = sTemp.GetLength();
char c;
for (int i = 0; i < nLength; ++i)
{
    c = sTemp[i];
    if (c >= 'a' && c <= 'm')
        c += 13;
    else if (c >= 'A' && c <= 'M')
        c += 13;
    else if (c >= 'n' && c <= 'z')
        c -= 13;
    else if (c >= 'N' && c <= 'Z')
        c -= 13;
    sResult += c;
}
AfxMessageBox( sResult );

It can also be done by accessing the buffer directly, in which case you can use almost all of your original code. It looks something like this:

CString sTemp = "Hello World";
int nLength = sTemp.GetLength();
// Limit scope of szTemp since it is not usable after
// the call to ReleaseBuffer
{
    char* szTemp = sTemp.GetBuffer();
    for (int i = 0; i < nLength; i++)
    {
        if (szTemp[i] >= 'a' && szTemp[i] <= 'm')
            szTemp[i] += 13;
        else if (szTemp[i] >= 'A' && szTemp[i] <= 'M')
            szTemp[i] += 13;
        else if (szTemp[i] >= 'n' && szTemp[i] <= 'z')
            szTemp[i] -= 13;
        else if (szTemp[i] >= 'N' && szTemp[i] <= 'Z')
            szTemp[i] -= 13;
    }
    sTemp.ReleaseBuffer();
}
AfxMessageBox( sTemp );

Hope that helps, D*
Htoi incorrect output at 10 digits
When I input 0x123456789 I get incorrect outputs, I can't figure out why. At first I thought it was a max possible int value problem, but I changed my variables to unsigned long and the problem was still there.

#include <iostream>
using namespace std;

long htoi(char s[]);

int main() {
    cout << "Enter Hex \n";
    char hexstring[20];
    cin >> hexstring;
    cout << htoi(hexstring) << "\n";
}

//Converts string to hex
long htoi(char s[]) {
    int charsize = 0;
    while (s[charsize] != '\0') {
        charsize++;
    }
    int base = 1;
    unsigned long total = 0;
    unsigned long multiplier = 1;
    for (int i = charsize; i >= 0; i--) {
        if (s[i] == '0' || s[i] == 'x' || s[i] == 'X' || s[i] == '\0') {
            continue;
        }
        if ( (s[i] >= '0') && (s[i] <= '9') ) {
            total = total + ((s[i] - '0') * multiplier);
            multiplier = multiplier * 16UL;
            continue;
        }
        if ((s[i] >= 'A') && (s[i] <= 'F')) {
            total = total + ((s[i] - '7') * multiplier); //'7' equals 55 in decimal, while 'A' equals 65
            multiplier = multiplier * 16UL;
            continue;
        }
        if ((s[i] >= 'a') && (s[i] <= 'f')) {
            total = total + ((s[i] - 'W') * multiplier); //W equals 87 in decimal, while 'a' equals 97
            multiplier = multiplier * 16UL;
            continue;
        }
    }
    return total;
}
long probably is 32 bits on your computer as well. Try long long.
You need more than 32 bits to store that number. Your long type could well be as small as 32 bits. Use a std::uint64_t instead. This is always a 64 bit unsigned type. If your compiler doesn't support that, use a long long. That must be at least 64 bits.
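To check which situation you are in, printing the type sizes is enough; here is a quick sketch (not from either answer) that also shows the fixed-width type holding the value in question:

#include <cstdint>
#include <iostream>

int main() {
    // On many common targets (e.g. 64-bit Windows) long is still 4 bytes.
    std::cout << "sizeof(long)          = " << sizeof(long) << '\n';
    std::cout << "sizeof(long long)     = " << sizeof(long long) << '\n';
    std::cout << "sizeof(std::uint64_t) = " << sizeof(std::uint64_t) << '\n';

    std::uint64_t value = 0x123456789ULL;  // needs 33 bits, so a 32-bit long overflows
    std::cout << value << '\n';            // prints 4886718345
}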
The idea follows the polynomial nature of a number. 123 is the same as

1*10^2 + 2*10^1 + 3*10^0

In other words, I had to multiply the first digit by ten two times. I had to multiply 2 by ten one time. And I multiplied the last digit by one.

Again, reading from left to right:
Multiply zero by ten and add the 1 → 0*10+1 = 1.
Multiply that by ten and add the 2 → 1*10+2 = 12.
Multiply that by ten and add the 3 → 12*10+3 = 123.

We will do the same thing:

#include <cctype>
#include <ciso646>
#include <iostream>
#include <string>
using namespace std;

unsigned long long hextodec( const std::string& s )
{
    unsigned long long result = 0;
    for (char c : s)
    {
        result *= 16;
        if (isdigit( c )) result |= c - '0';
        else              result |= toupper( c ) - 'A' + 10;
    }
    return result;
}

int main( int argc, char** argv )
{
    cout << hextodec( argv[1] ) << "\n";
}

You may notice that the function is more than three lines. I did that for clarity. C++ idioms can make that loop a single line:

for (char c : s)
    result = (result << 4) | (isdigit( c ) ? (c - '0') : (toupper( c ) - 'A' + 10));

You can also do validation if you like. What I have presented is not the only way to do the digit-to-value conversion. There exist other methods that are just as good (and some that are better). I do hope this helps.
I found out what was happening: when I inputted "1234567890" it would skip over the '0', so I had to modify the code. The other problem was that long was indeed 32 bits, so I changed it to uint64_t as suggested by @Bathsheba. Here's the final working code.

#include <cstdint>
#include <iostream>
using namespace std;

uint64_t htoi(char s[]);

int main() {
    char hexstring[20];
    cin >> hexstring;
    cout << htoi(hexstring) << "\n";
}

//Converts string to hex
uint64_t htoi(char s[]) {
    int charsize = 0;
    while (s[charsize] != '\0') {
        charsize++;
    }
    int base = 1;
    uint64_t total = 0;
    uint64_t multiplier = 1;
    for (int i = charsize; i >= 0; i--) {
        if (s[i] == 'x' || s[i] == 'X' || s[i] == '\0') {
            continue;
        }
        if ( (s[i] >= '0') && (s[i] <= '9') ) {
            total = total + ((uint64_t)(s[i] - '0') * multiplier);
            multiplier = multiplier * 16;
            continue;
        }
        if ((s[i] >= 'A') && (s[i] <= 'F')) {
            total = total + ((uint64_t)(s[i] - '7') * multiplier); //'7' equals 55 in decimal, while 'A' equals 65
            multiplier = multiplier * 16;
            continue;
        }
        if ((s[i] >= 'a') && (s[i] <= 'f')) {
            total = total + ((uint64_t)(s[i] - 'W') * multiplier); //W equals 87 in decimal, while 'a' equals 97
            multiplier = multiplier * 16;
            continue;
        }
    }
    return total;
}
Wrapping chars in Caesar cipher encode
Can anyone please explain to me how this wrapping of chars between a-to-z and A-to-Z happens in this Caesar shift code?

k %= 26;
for(int i = 0; i < n; i++){
    int c = s[i];
    if(c >= 'a' && c <= 'z'){
        c += k;
        if( c > 'z'){
            c = 96 + (c % 122); // wrapping from z to a?
        }
    }
    else if(c >= 'A' && c <= 'Z'){
        c += k;
        if(c > 'Z'){
            c = 64 + (c % 90);
        }
    }
    cout << (char)c;
}

k is the amount of shift and c is a char of the string s. Is there any better way to do the same?
Let's make a couple of changes to the code and it becomes easier to see what is going on:

for(int i = 0; i < n; i++){
    int c = s[i];
    if(c >= 'a' && c <= 'z'){
        c += k;
        if( c > 'z'){
            c = 'a' + (c % 'z') - 1; // wrapping from z to a?
        }
    }
    else if(c >= 'A' && c <= 'Z'){
        c += k;
        if(c > 'Z'){
            c = 'A' + (c % 'Z') - 1;
        }
    }
    cout << (char)c;
}

So in c = 'a' + (c % 'z') - 1; if c is larger than 'z', then we mod c by 'z' (122) to get how many characters past 'a' we need to go. The same thing is going on with the upper case letters. I am subtracting one here because we are starting at 'a' instead of the character before 'a', like your original code does.
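As for a better way: a common alternative (a sketch of my own, not taken from the code above) works with each letter's offset from 'a' or 'A', so the wrap is a single modulo and no raw ASCII values like 96 or 122 appear at all:

#include <iostream>
#include <string>
using namespace std;

// Shift every letter in s forward by k positions, wrapping within its own case.
string caesar(string s, int k) {
    k = ((k % 26) + 26) % 26;              // normalize k, works for negative shifts too
    for (char &c : s) {
        if (c >= 'a' && c <= 'z')
            c = 'a' + (c - 'a' + k) % 26;  // offset 0..25, shift, wrap, re-base
        else if (c >= 'A' && c <= 'Z')
            c = 'A' + (c - 'A' + k) % 26;
    }
    return s;
}

int main() {
    cout << caesar("Hello, World!", 3) << "\n";  // prints "Khoor, Zruog!"
}

Because the offset is reduced to 0..25 before re-basing, there is no separate overflow check at all, which is where the original magic numbers came from.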