Shift Cipher Gives Funny Characters. Related to ASCII shifting? - c++

#include <iostream>

using namespace std;

const int MAXLEN = 50; // max length for input sequence of characters
const int MAXKEY = 10; // max length for key

// Shift Cipher Function. Use of ASCII Table for algorithm. (PROBLEMATIC)
char Shift(char chr, int k)
{
    char c = chr;
    if (c >= 'a' && c <= 'z')
    {
        c = c + k - 32; // For Uppercase
        if (c < 65)
        {
            c = c + 26; // A-1 must be Z
        }
        else if (c > 90)
        {
            c = c - 26; // Z+1 must be A
        }
        return c;
    }
    else if (c >= 'A' && c <= 'Z')
    {
        c = c + k + 32; // + 32 to convert it to Lower case
        if (c < 97)
        {
            c = c + 26; // a-1 must be z
        }
        else if (c > 122)
        {
            c = c - 26; // z+1 must be a
        }
        return c;
    }
    else
        return c;
}
int main()
{
    // to store user inputs
    char s;            // 'e' for encryption, 'd' for decryption
    char text[MAXLEN]; // the sequence of characters to encrypt/decrypt
    char key[MAXKEY];  // the key
    char c, v, ch;
    int n, cn = 0;
    cin >> s;
    // For the first input
    for (int i = 0; i < 51; ++i)
    {
        cin >> c;
        if (c != '!')
        {
            text[i] = c;
            cn++;
        }
        else
        {
            break;
        }
    }
    // For the second input
    cin >> n;
    for (int i = 0; i < n; ++i)
    {
        cin >> v;
        key[i] = v;
    }
    // Execution
    for (int i = 0; i < 51; ++i)
    {
        if (s == 'd')
        {
            ch = Shift(text[i], -key[i]); // 2->1 Logic
            cout << ch;
        }
        else if (s == 'e')
        {
            ch = Shift(text[i], key[i]);
            cout << ch;
        }
        if (ch != ' ')
        {
            cout << ch;
        }
    }
    return 0;
}
Hi everyone. I am trying to solve a shift cipher problem but I do not know how to deal with the funny character output.
We first consider the algorithm to encrypt/decrypt a single character:
To encrypt (decrypt) a letter c (within the alphabet A-Z or a-z) with a shift of k positions:
Let x be c's position in the alphabet (0 based), e.g., the position of B is 1 and position of g is 6.
For encryption, calculate y = x + k modulo 26;
for decryption, calculate y = x − k modulo 26.
Let w be the letter corresponding to position y in the alphabet. If c is in
uppercase, the encrypted (decrypted) letter is w in lowercase; otherwise, the
encrypted (decrypted) letter is w in uppercase.
A character which is not within the alphabet A-Z or a-z will remain unchanged under
encryption or decryption.
Example. Given letter B and k = 3, we have x = 1, y = (1 + 3) mod 26 = 4, and w = E. As B
is in uppercase, the encrypted letter is e.
Now, to encrypt/decrypt a sequence of characters:
The number of positions, k, used to shift a character is determined by a key V of n
characters. For example, suppose V is the 4-character key 'C', 'O', 'M', 'P'; the positions of these letters in the
alphabet are 2, 14, 12 and 15, respectively. To encrypt a sequence of characters, we shift the
first character by +2 positions, the second by +14, the third by +12, the fourth by +15 and
repeat the key, i.e., we shift the fifth character by +2, the sixth by +14, until we encrypt all the
characters in the input sequence.
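For reference, here is a minimal sketch of the single-character rule above (the function name and layout are mine; only the rule itself comes from the problem statement):

// Sketch: shift within the alphabet, wrap with modulo, then swap the case.
char shiftOne(char c, int k)
{
    if (c >= 'a' && c <= 'z') {
        int y = ((c - 'a') + k % 26 + 26) % 26; // 0-based position, wrapped into 0..25
        return char('A' + y);                   // lowercase in, uppercase out
    }
    if (c >= 'A' && c <= 'Z') {
        int y = ((c - 'A') + k % 26 + 26) % 26; // works for negative k (decryption) too
        return char('a' + y);                   // uppercase in, lowercase out
    }
    return c; // non-letters are unchanged
}

For a sequence, the shift for the i-th character would then come from key position i % n, giving the key repetition described above.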
This is the intended I/O:
Input:
e !
3 A B C
Output:
!
But the output is problematic:
≡≡↨↨╧╧??÷÷00αα‼‼▓▓╘╘zz☻☻ §§ll??÷÷☺☺
So I suspect that the problem is in the Shift function rather than in main(), but I am not sure why.

Related

Issues with Caesar Cipher not decrypting correctly in C++

I am aware that this question has been asked a few times, and I may have missed the question that answers my specific problem, but I cannot seem to find one that gives me an answer that works for me.
When I decrypt a Caesar cipher it doesn't seem to wrap around correctly. My code seems to follow the usual mathematics for the Caesar cipher, but it returns junk output when it's supposed to wrap around. My code is as follows, including the test harness I used to reproduce the problem.
#include "main.h"
#include <QCoreApplication>
#include <QDebug>
String caesarCipher(QString in, int shift, bool decrypt)
/*
* Caesar shift is mathmatically represented as e = (q + s) mod 26
* Decryption is represented as d = (q - s) mod 26
* ROT13 is a caesar shift with 13 shift
*/
{
QString out;
if (!decrypt)
{
for (int i = 0; i < in.length(); ++i)
{
if (in[i] >= 'a' && in[i] <= 'z')
{
int q = (in[i].unicode() - 'a');
int e = (q + shift) % 26;
out += e + 'a';
}
else if (in[i] >= 'A' && in[i] <= 'Z')
{
int q = (in[i].unicode() - 'A');
int e = (q + shift) % 26;
out += e + 'A';
}
else
out += in[i];
}
return out;
}
else
{
for (int i = 0; i < in.length(); ++i)
{
if (in[i] >= 'a' && in[i] <= 'z')
{
int q = (in[i].unicode() - 'a');
int d = (q - shift) % 26;
int r = d + 'a';
out += r;
}
else if (in[i] >= 'A' && in[i] <= 'Z')
{
int q = (in[i].unicode() - 'A');
int d = (q - shift) % 26;
int r = d + 'A';
out += r;
}
else
out += in[i];
}
return out;
}
}
int main() // Testing
{
    QString testString = "abcdefghijklmnopqrstuvwxyz";
    QString upperTest = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    const int shifting = 3;
    qDebug() << "Test String: " << testString;
    qDebug() << "Test String (Upper): " << upperTest;
    {
        QString e = caesarCipher(testString, shifting, false);
        QString E = caesarCipher(upperTest, shifting, false);
        QString d = caesarCipher(e, shifting, true);
        QString D = caesarCipher(E, shifting, true);
        qDebug() << "Shift amount: " << shifting;
        qDebug() << "Encrypt (Lower): " << e;
        qDebug() << "Encrypt (Upper): " << E;
        qDebug() << "Decrypt (Lower): " << d;
        qDebug() << "Decrypt (Upper): " << D;
    }
    return 0;
}
The expected result is
Test String: "abcdefghijklmnopqrstuvwxyz"
Test String (Upper): "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
Shift amount: 3
Encrypt (Lower): "defghijklmnopqrstuvwxyzabc"
Encrypt (Upper): "DEFGHIJKLMNOPQRSTUVWXYZABC"
Decrypt (Lower): "abcdefghijklmnopqrstuvwxyz"
Decrypt (Upper): "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
Press <RETURN> to close this window...
The result I get:
Test String: "abcdefghijklmnopqrstuvwxyz"
Test String (Upper): "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
Shift amount: 3
Encrypt (Lower): "defghijklmnopqrstuvwxyzabc"
Encrypt (Upper): "DEFGHIJKLMNOPQRSTUVWXYZABC"
Decrypt (Lower): "abcdefghijklmnopqrstuvw^_`"
Decrypt (Upper): "ABCDEFGHIJKLMNOPQRSTUVW>?#"
Press <RETURN> to close this window...
I have tried moving code around: changing where the shift is subtracted, where the modulo is done, and where the 'a' character is added.
For reference, this was the code before I altered it for readability:
for (int i = 0; i < in.length(); ++i)
{
    if (in[i] >= 'a' && in[i] <= 'z')
        out.resultString += (((in[i].unicode() - 'a') - shift) % m) + 'a';
    else if (in[i] >= 'A' && in[i] <= 'Z')
        out.resultString += (((in[i].unicode() - 'A') - shift) % m) + 'A';
    else
        out.resultString += in[i];
}
The % operator can return negative results when used with negative numbers. In your case, when decrypting an 'a', q will be 0, d will be (-3 % 26), which can be -3.
The solution is to ensure the number is positive before calculating the remainder:
int d = (q - shift + 26) % 26;
Or, if the shift amount is unknown, or can be more than 25, check if d is negative and add 26 to it after your initial calculation.
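Concretely, the decrypt branch could compute the position either way (a sketch only; variable names follow the code above):

int q = in[i].unicode() - 'a'; // 0-based position in the alphabet

// Variant 1: safe when 0 <= shift <= 26
int d = (q - shift + 26) % 26;

// Variant 2: safe for any shift value
int d2 = (q - shift) % 26;
if (d2 < 0)
    d2 += 26;

out += d + 'a'; // back to a letter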

How to cyclically increment and decrement 26 latin characters in a loop

I want to increment or decrement characters, but have them cycle back to a when going beyond z and to z when going before a.
For example incrementing 'w' by 2 gives 'y' and decrementing 'w' by 2 gives 'u'.
Another example decrementing 'w' by 28 gives 'u' and decrementing 'a' by 256 gives 'e'.
I've figured out how to increment: char(int(A[i] + B - 97) % 26 + 97), where B is the shift amount and A[i] is the current character.
Don't overcomplicate. Use modulo to keep the increment or decrement amount in a range of 26 characters, then simply do a range check:
char cyclicIncrementDecrement(char ch, int amount)
{
    int newValue = int(ch) + (amount % 26);
    if (newValue < 'a') newValue += 26;
    if (newValue > 'z') newValue -= 26;
    return char(newValue);
}
This method of course assumes ch already is in range of 'a' to 'z'. If not, you need to handle that (put it in range or throw an exception or whatever is appropriate for your application).
Running this:
#include <iostream>

int main()
{
    std::cout << cyclicIncrementDecrement('w', -2) << std::endl;
    std::cout << cyclicIncrementDecrement('w', 2) << std::endl;
    std::cout << cyclicIncrementDecrement('w', -28) << std::endl;
    std::cout << cyclicIncrementDecrement('a', -256) << std::endl;
    std::cout << cyclicIncrementDecrement('z', -256) << std::endl;
    std::cout << cyclicIncrementDecrement('z', -51) << std::endl;
    std::cout << cyclicIncrementDecrement('z', -52) << std::endl;
}
gives:
u
y
u
e
d
a
z
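As noted above, inputs outside 'a'..'z' still need handling. A guarding wrapper might look like this (my addition, not part of the answer):

#include <stdexcept>

// Sketch: reject characters outside 'a'..'z' before shifting.
char safeCyclicIncrementDecrement(char ch, int amount)
{
    if (ch < 'a' || ch > 'z')
        throw std::out_of_range("expected a lowercase latin letter");
    return cyclicIncrementDecrement(ch, amount);
}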
Using modular arithmetic, calculate your answer modulo 26 and then add 'a' (ASCII 97) to the result.
char cyclic_increment(char ch, int n) {
    int tmp = ((ch - 97) + n) % 26;
    if (tmp < 0)
        tmp += 26;
    return (char)(tmp + 97);
}
Alternatively, you could write the above (without an if) as:
char cyclic_increment(char ch, int n) {
    return (((ch - 'a') + n) % 26 + 26) % 26 + 'a';
}
This handles both positive and negative offsets:
unsigned char az_cyclic(int in_ch)
{
    constexpr int mod = 26;           // There are 26 letters in the English alphabet
    int offset = (in_ch - 'a') % mod; // (ASCII to zero-based offset) mod-26 remainder
    if (offset < 0)                   // If negative offset,
        offset += mod;                // normalize to positive. For example: -1 to 25
    return 'a' + offset;              // Normalize to ASCII
}
Use-cases:
int main()
{
    unsigned char out_ch = '\0';
    out_ch = az_cyclic('a' - 1);      // 'z'
    out_ch = az_cyclic('a' - 1 - 26); // 'z'
    out_ch = az_cyclic('a' - 2);      // 'y'
    out_ch = az_cyclic('a' + 4);      // 'e'
    out_ch = az_cyclic('a' + 4 + 26); // 'e'
    out_ch = az_cyclic('a' + 2);      // 'c'
    return 0;
}

Caesar cipher: how to calculate with shifting value > 10 (or larger)?

As I know, the "formula" for Caesar shifting is (x + k) % 26, where k is the shifting value; decryption just replaces "+" with "-".
But my code does not work when k > 10. After testing k = 10, I found that the shift of the first few characters is wrong, and I expect the number of incorrect characters to grow for k > 10 as well. I first change the characters to ASCII, then do the calculation, and finally change the result back to characters.
Here is my code.
#include <iostream>
#include <string>
using namespace std;

int main() {
    string target;
    char s;
    int k, i, num, length, j;
    cin >> s >> k;
    getline(cin, target);
    for (j = 0; j <= (int)target.length(); j++) {
        if ((target[j]) = ' ') {
            target.erase(j, 1);
        }
    }
    length = (int)target.length();
    if (s == 'e') {
        for (num = 0; num <= length; num++) {
            if (isupper(target[num]))
                target[num] = tolower(char(int(target[num] + k - 65) % 26 + 65));
            else if (islower(target[num]))
                target[num] = toupper(char(int(target[num] + k - 97) % 26 + 97));
        }
    }
    else if (s == 'd') {
        for (num = 0; num <= length; num++) {
            if (isupper(target[num]))
                target[num] = tolower(char(int(target[num] - k - 65) % 26 + 65));
            else if (islower(target[num]))
                target[num] = toupper(char(int(target[num] - k - 97) % 26 + 97));
        }
    }
    cout << target;
    return 0;
}
Let me put down a case which I failed to run.
input:
d 10 n 3 V 3 D 3 N _ M Y N 3 _ S C _ N 3 L E (input d/e first, then the shifting value, and finally the sequence of characters to "change"; the spaces are to be deleted.)
the expected output:
D3l3t3d_cod3_is_d3bu
my output:
D3l3:3d_cod3_i9_d3b;
Thanks!
Your issue is that when decoding you end up with negative numbers. With k == 10, the expression 'D' - k - 65 gives -7. -7 % 26 is still -7, and -7 + 65 is 58, which is ':' rather than a letter.
You can avoid negative numbers by simply setting k to 26 - k when decoding (this assumes k is in the range 0 to 26; a larger k would need to be reduced with k % 26 first).
Your code then simplifies to:
if (s == 'd') {
    k = 26 - k;
}
for (num = 0; num <= length; num++) {
    if (isupper(target[num]))
        target[num] = tolower(char(int(target[num] + k - 'A') % 26 + 'A'));
    else if (islower(target[num]))
        target[num] = toupper(char(int(target[num] + k - 'a') % 26 + 'a'));
}
Note I've replaced your integer constants with their equivalent character literals, which makes the code much easier to understand.
Note you also have a bug in your first loop: (target[j]) = ' ' should be (target[j]) == ' '.
Using all C++ has to offer, you can reduce your code to:
#include <iostream>
#include <string>
#include <algorithm>

int main() {
    std::string target = "mXLM";
    char s = 'e';
    int k = 7;
    target.erase(std::remove(target.begin(), target.end(), ' '), target.end());
    if (s == 'd') {
        k = 26 - k;
    }
    std::string result;
    std::transform(target.begin(), target.end(), std::back_inserter(result), [k](char in) {
        if (isalpha(in)) {
            char inputOffset = isupper(in) ? 'A' : 'a';
            char outputOffset = isupper(in) ? 'a' : 'A';
            return char(int(in + k - inputOffset) % 26 + outputOffset);
        }
        return in;
    });
    std::cout << result;
    return 0;
}

Optimizing Hexadecimal To Ascii Function in C++

This is a function in C++ that takes a HEX string and converts it to the equivalent ASCII characters.
string HEX2STR (string str)
{
    string tmp;
    const char *c = str.c_str();
    unsigned int x;
    while (*c != 0) {
        sscanf(c, "%2X", &x);
        tmp += x;
        c += 2;
    }
    return tmp;
}
If you input the following string:
537461636b6f766572666c6f77206973207468652062657374212121
The output will be:
Stackoverflow is the best!!!
Say I were to input 1,000,000 unique HEX strings into this function; it takes a while to compute.
Is there a more efficient way to complete this?
Of course. Look up two characters at a time:
unsigned char val(char c)
{
    if ('0' <= c && c <= '9') { return c - '0'; }
    if ('a' <= c && c <= 'f') { return c + 10 - 'a'; }
    if ('A' <= c && c <= 'F') { return c + 10 - 'A'; }
    throw "Eeek";
}

std::string decode(std::string const & s)
{
    if ((s.size() % 2) != 0) { throw "Eeek"; }

    std::string result;
    result.reserve(s.size() / 2);

    for (std::size_t i = 0; i < s.size() / 2; ++i)
    {
        unsigned char n = val(s[2 * i]) * 16 + val(s[2 * i + 1]);
        result += n;
    }

    return result;
}
Just since I wrote it anyway, this should be fairly efficient :)
const char lookup[32] =
    {0,10,11,12,13,14,15,0,0,0,0,0,0,0,0,0,0,1,2,3,4,5,6,7,8,9,0,0,0,0,0,0};

std::string HEX2STR(std::string str)
{
    std::string out;
    out.reserve(str.size() / 2);
    const char* tmp = str.c_str();
    unsigned char ch = 0, last = 1;
    while (*tmp)
    {
        ch <<= 4;
        ch |= lookup[*tmp & 0x1f]; // the low 5 bits of '0'-'9', 'a'-'f' and 'A'-'F' index the table
        if (last ^= 1)             // emit a byte after every second hex digit
            out += ch;
        tmp++;
    }
    return out;
}
Don't use sscanf. It's a very general, flexible function, which means it's slow in order to allow all those use cases. Instead, walk the string and convert each character yourself; that's much faster.
This routine takes a string with (what I call) hex words, often used in embedded ECUs, for example "31 01 7F 33 38 33 37 30 35 31 30 30 20 20 49", and transforms it into readable ASCII where possible.
It handles the discontinuity in the ASCII table ('0'-'9' are 48-57, 'A'-'F' are 65-70):
int i, j, len = strlen(stringWithHexWords);
char ascii_buffer[250];
char c1, c2, r;

i = 0;
j = 0;
while (i < len) {
    c1 = stringWithHexWords[i];
    c2 = stringWithHexWords[i + 1];
    if ((int)c1 != 32) { // if a space is found, skip this section and bump the index only once
        // skip scary ASCII codes
        if (32 < (int)c1 && 127 > (int)c1 && 32 < (int)c2 && 127 > (int)c2) {
            // transform by taking the first hex digit * 16 and adding the second hex digit,
            // both with the correct offset
            r = (char)(16 * ((int)c1 < 64 ? (int)c1 - 48 : (int)c1 - 55)
                          + ((int)c2 < 64 ? (int)c2 - 48 : (int)c2 - 55));
            if (31 < (int)r && 127 > (int)r)
                ascii_buffer[j++] = r; // check result for readability
        }
        i++; // bump index
    }
    i++; // bump index once more for the next hex digit
}
ascii_bufferCurrentLength = j;
return true;
}
The hexToString() function below converts a hex string to a readable ASCII string:
string hexToString(string str) {
    std::stringstream HexString;
    for (int i = 0; i < str.length(); i++) {
        char a = str.at(i++); // first hex digit (note: i advances twice per pair)
        char b = str.at(i);   // second hex digit
        int x = hexCharToInt(a);
        int y = hexCharToInt(b);
        HexString << (char)((16 * x) + y);
    }
    return HexString.str();
}

int hexCharToInt(char a) {
    if (a >= '0' && a <= '9')
        return (a - 48); // '0' is 48
    else if (a >= 'A' && a <= 'Z')
        return (a - 55); // 'A' is 65, so 'A' maps to 10
    else
        return (a - 87); // 'a' is 97, so 'a' maps to 10
}
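Note that, as written, hexCharToInt is called before it is declared; in a single file you would forward-declare it (or define it first) and include <sstream> and <string>. A usage sketch under those assumptions:

#include <iostream>
#include <sstream>
#include <string>
using std::string;

int hexCharToInt(char a); // forward declaration so hexToString can call it
// ... hexToString() and hexCharToInt() as above ...

int main() {
    std::cout << hexToString("48656c6c6f") << std::endl; // prints "Hello"
    return 0;
}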

CString Hex value conversion to Byte Array

I have been trying to convert a CString that contains a hex string to a byte array, and I have been unsuccessful so far. I have looked on forums and none of them seem to help. Is there a function with just a few lines of code to do this conversion?
My code:
BYTE abyData[8]; // BYTE = unsigned char
CString sByte = "0E00000000000400";
Expecting:
abyData[0] = 0x0E;
abyData[6] = 0x04; // etc.
You can simply gobble up two characters at a time:
unsigned int value(char c)
{
    if (c >= '0' && c <= '9') { return c - '0'; }
    if (c >= 'A' && c <= 'F') { return c - 'A' + 10; }
    if (c >= 'a' && c <= 'f') { return c - 'a' + 10; }
    return -1; // Error!
}

for (unsigned int i = 0; i != 8; ++i)
{
    abyData[i] = value(sByte[2 * i]) * 16 + value(sByte[2 * i + 1]);
}
Of course 8 should be the size of your array, and you should ensure that the string is precisely twice as long. A checking version of this would make sure that each character is a valid hex digit and signal some type of error if that isn't the case.
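For example, a checking version might look like this (a sketch; the exact error-handling policy is my assumption):

#include <cstring>
#include <stdexcept>

// Sketch: same conversion as above, but reject non-hex digits and bad lengths.
unsigned int checkedValue(char c)
{
    if (c >= '0' && c <= '9') { return c - '0'; }
    if (c >= 'A' && c <= 'F') { return c - 'A' + 10; }
    if (c >= 'a' && c <= 'f') { return c - 'a' + 10; }
    throw std::invalid_argument("not a hex digit");
}

void decodeChecked(const char* hex, unsigned char* out, std::size_t outLen)
{
    if (std::strlen(hex) != outLen * 2)
        throw std::invalid_argument("hex string has the wrong length");
    for (std::size_t i = 0; i != outLen; ++i)
        out[i] = static_cast<unsigned char>(checkedValue(hex[2 * i]) * 16
                                            + checkedValue(hex[2 * i + 1]));
}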
How about something like this:
for (int i = 0; i < sizeof(abyData) && (i * 2) < sByte.GetLength(); i++)
{
    char ch1 = sByte[i * 2];
    char ch2 = sByte[i * 2 + 1];
    int value = 0;

    if (std::isdigit(ch1))
        value += ch1 - '0';
    else
        value += (std::tolower(ch1) - 'a') + 10;

    // That was the four high bits, so make them that
    value <<= 4;

    if (std::isdigit(ch2))
        value += ch2 - '0';
    else
        value += (std::tolower(ch2) - 'a') + 10;

    abyData[i] = value;
}
Note: The code above is not tested.
You could:
#include <stdint.h>
#include <sstream>
#include <iostream>

int main() {
    unsigned char result[8];
    std::stringstream ss;
    ss << std::hex << "0E00000000000400";
    ss >> *( reinterpret_cast<uint64_t *>( result ) );
    std::cout << static_cast<int>( result[1] ) << std::endl;
}
However, take care: reading through a reinterpret_cast like this has alignment and aliasing pitfalls!
Also, the result is in the reverse of the order you would expect (on a little-endian machine), so:
result[0] = 0x00
result[1] = 0x04
...
result[7] = 0x0E
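If you need the bytes in the original order, one option (my sketch, assuming the little-endian layout above) is simply to reverse the buffer afterwards:

#include <algorithm>

std::reverse(result, result + 8); // result[0] is 0x0E again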