So I am trying to write a program that works as a cipher. It takes the word to encode as input and outputs (prints) the coded word. The problematic snippet of my code is the for loop.
#include <stdio.h>
#include <ctype.h>
#include <string.h>

int main(void)
{
    char input[] = "hello";
    printf("hello\n");
    printf("ciphertext: ");
    for (int i = 0; i < 5; i++)
    {
        if (isalpha(input[i]))
        {
            int current = input[i];
            int cypher = ((current + 1) % 26) + current;
            char out = (char)cypher;
            printf("%c", out);
        }
        else
        {
            printf("%c", input[i]);
        }
    }
    printf("\n");
}
The problem I run into when debugging is that the value stored in "out" seems correct, but when it comes to printing it, something else entirely shows up. I did look up quite a few suggestions on here, such as writing the code like this:
char out = (char)cypher;
char out= cypher + '0';
and so on, but to no avail. The output should be ifmmp, but instead I get j~rrx.
Anything would help! thanks :)
You're getting the correct answer: 105 is the ASCII value of 'i', so there is no difference. More precisely, char is an integer type. On virtually all compilers it is 8 bits in size, so an unsigned char can hold values from 0 to 255 and a signed char from -128 to 127.
So when your out variable has the value 105, it has the value 'i'.
The output of your printf will be:
i
But if you look at the out variable in a debugger, you might see 105, depending on the debugger.
Related
I was trying the Caesar Cipher problem and got stuck on a very beginner-looking bug, but I don't know why my code behaves this way. I added an integer to a char and expected its value to increase, but I get a negative number instead. Here is my code. I found a way around it, but why does this code behave this way?
#include <iostream>
using std::cout; using std::endl;

int main()
{
    char ch = 'w';
    int temp;
    temp = int(ch) + 9;
    ch = temp;
    cout << temp << endl;
    cout << (int)ch;
    return 0;
}
Output:
128
-128
A signed char can typically hold values from -128 to 127, so the value 128 overflows it. (Strictly speaking, converting an out-of-range value to a signed char is implementation-defined; on typical two's-complement platforms, 128 wraps around to -128.)
I need to get the hash (SHA-1) value of a given unsigned char array, so I have used OpenSSL. The SHA1 function generates the hash value in an unsigned char array of 20 bytes; each byte represents two hexadecimal digits.
But I need to convert the generated array (of length 20) into an array of 40 chars.
For example, right now hashValue[0] is "a0", but I want hashValue[0] = "a" and hashValue[1] = "0".
#include <iostream>
#include <openssl/sha.h> // For SHA-1

using namespace std;

int main() {
    unsigned char plainText[] = "compute sha1";
    unsigned char hashValue[20];
    SHA1(plainText, sizeof(plainText), hashValue);
    for (int i = 0; i < 20; i++) {
        printf("%02x", hashValue[i]);
    }
    printf("\n");
    return 0;
}
You could create another array and use sprintf, or the safer snprintf, to print into it instead of to standard output.
Something like this:
#include <iostream>
#include <stdio.h>
#include <openssl/sha.h> // For SHA-1

using namespace std;

int main() {
    unsigned char plainText[] = "compute sha1";
    unsigned char hashValue[20];
    char output[41]; // 40 hex digits plus the terminating NUL
    SHA1(plainText, sizeof(plainText), hashValue);
    char *c_output = output;
    for (int i = 0; i < 20; i++, c_output += 2) {
        snprintf(c_output, 3, "%02x", hashValue[i]);
    }
    return 0;
}
Now output[0] == 'a' and output[1] == '0'.
There might be other, even better solutions; this is just the first that comes to mind.
EDIT: Added fix from comments.
It seems like you want to separate the high-order and low-order nibbles.
To isolate the high-order nibble, shift right 4 bits.
To isolate the low-order nibble, apply a mask: AND with 0x0F.
int x = 0x3A;
int y = x >> 4; // get high order nibble
int z = x & 0x0F; // get low order nibble
printf("%02x\n", x);
printf("%02x\n", y);
printf("%02x\n", z);
I'm writing C++ code that converts an unsigned base-10 integer to any other base between 2 and 36. I haven't coded in a while, so I'm kind of re-learning everything. My questions are: how can I keep it to just printf, without the cout at the end, and still display the ASCII value? And is it possible to make it simpler (more basic)? Sorry if I didn't format properly.
#include <iostream>
#include <stdlib.h>
#include <stdio.h>
#include <string>

using namespace std;

int main()
{
    int InitialNum, BaseNum, Num, x;
    string FinalNum, Temp;
    printf("Enter an unsigned integer of base ten: \n"); // Prompt user for input
    scanf_s("%d", &InitialNum);
    printf("Enter the base you want to convert to (min 2, max 36): \n");
    scanf_s("%d", &BaseNum);
    x = InitialNum; // save the base-10 number to display at the end
    while (InitialNum != 0) // continue dividing until the original input is 0
    {
        Num = InitialNum % BaseNum; // save remainder to Num
        int ascii = 48; // conversion variable (from int to char)
        for (int i = 0; i < 36; ++i) // converts Num from int 0-35 to char '0'-'9', 'A'-'Z'
        {
            if (Num == i)
                Temp = ascii;
            ascii += 1;
            if (ascii == 58) // skip from '9' to 'A' in the ASCII table and continue
                ascii = 65;
        }
        FinalNum = Temp + FinalNum; // add to the final answer (prepending on the left)
        InitialNum /= BaseNum; // divide the initial base-10 number by the base and keep the quotient
    }
    printf("The number %d converted to base %d is:", x, BaseNum);
    cout << FinalNum;
    system("PAUSE");
    return 0;
}
In order to output a std::string with printf you have to serve it to printf as a null-terminated string, the C-style string. Or, well, you don't absolutely have to: you could print one character at a time. But it's easiest and most practical to serve it as a null-terminated string.
You can do that via the .c_str() member function, hence:
printf( "%s\n", FinalNum.c_str() );
Note that using std::string can be expensive at this low level of things, due to the dynamic allocation(s). E.g., one benchmark timing various implementations of itoa (http://www.strudel.org.uk/itoa/) found a 40x penalty.
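As an aside, the 36-entry lookup loop in the question collapses to a single table index per digit. A sketch of that idea (my own helper, not the asker's code):

```cpp
#include <string>

// Convert a non-negative value to a string in any base from 2 to 36,
// picking each digit character straight out of a table.
std::string to_base(unsigned int n, unsigned int base)
{
    static const char digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    if (n == 0) return "0";
    std::string out;
    while (n != 0) {
        out = digits[n % base] + out; // prepend the digit for this place value
        n /= base;
    }
    return out;
}
```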
I am making a little Arduino binary calculator.
I have the code run some little math problem: ✓
I convert the answer from decimal to binary: ✓
I loop through the binary answer with a for loop and power on LEDs on a breadboard to display the answer: ✗
// First LED on pin 2
void setup()
{
    Serial.begin(9600);
}

// I have the code run some little math problem: check
int a = 2;
int b = 5;
int answer = b - a;
int myNum = answer;

void loop() {
    // I convert the answer from decimal to binary: check
    int zeros = 8 - String(myNum, BIN).length();
    String myStr;
    for (int i = 0; i < zeros; i++) {
        myStr = myStr + "0";
    }
    myStr = myStr + String(myNum, BIN);
    Serial.println(myStr);
    // I loop through the binary answer with a for loop
    // and power on LEDs on a breadboard to display the answer: not check
    for (int i = 2; i <= 9; i = i + 1) {
        // This part doesn't work
        if (int(myStr[i - 2]) == 1) {
            digitalWrite(int(i), HIGH);
        } else {
            Serial.println(myStr[i - 2]);
        }
    }
    while (true) {}
}
For some reason int(myStr[i-2]) is never equal to 1.
Thanks in advance for the help.
The int() conversion is likely not doing what you think it does. It does not convert a numeric character to its numeric value. Instead, you probably want to check whether the characters in the string are the ASCII digit characters themselves.
if(myStr[i - 2] == '1')
// ^^^ single quotes to specify character value.
{
digitalWrite(int(i), HIGH);
}
else
{
Serial.println(myStr[i - 2]);
}
You should consider that a char is just a narrow integer type, so casting a char to int is essentially a no-op. The problem is that you are casting the character '1' or '0' to its int equivalent (its ASCII code, 49 or 48). You should convert the char to a valid int (by subtracting '0', i.e. 48, from a char in the range 48-57 you obtain its decimal value) or simply compare it against a char literal (so myStr[i-2] == '1').
As part of a larger program, I must convert a string of digits to an integer (eventually a float). Unfortunately I am not allowed to use casting or atoi.
I thought a simple operation along the lines of this:
void power10combiner(string deciValue){
    int result;
    int MaxIndex = strlen(deciValue);
    for (int i = 0; MaxIndex > i; i++)
    {
        result += (deciValue[i] * 10**(MaxIndex-i));
    }
}
would work. How do I convert a char to an int? I suppose I could use ASCII conversions, but I wouldn't be able to add chars to ints anyway (assuming the conversion method is an enormous if statement that returns the numerical value behind each ASCII digit).
There are plenty of ways to do this, and there are some optimizations and corrections that can be applied to your function.
1) You are not returning any value from your function, so the return type should be int.
2) You can avoid a copy by passing a const reference.
Now for the examples.
Using std::stringstream to do the conversion.
int power10combiner(const string& deciValue)
{
    int result = 0;
    std::stringstream ss;
    ss << deciValue;
    ss >> result;
    return result;
}
Without using std::stringstream to do the conversion.
int power10combiner(const string& deciValue)
{
    int result = 0;
    for (int pos = 0; deciValue[pos] != '\0'; pos++)
        result = result * 10 + (deciValue[pos] - '0');
    return result;
}
EDITED by suggestion, with a bit of explanation added.
int base = 1;
int len = strlen(deciValue);
int result = 0;
for (int i = len - 1; i >= 0; i--) { // loop right to left over the digits
    result += (deciValue[i] - '0') * base; // subtracting '0' converts the digit character to its value
    base *= 10; // raise the place value for the next digit to the left
}
This assumes the string contains only digits. Please comment if you need more clarification.
You can parse a string iteratively into an integer by simply implementing the place-value system, for any number base. Assuming your string is null-terminated and the number unsigned:
unsigned int parse(const char * s, unsigned int base)
{
    unsigned int result = 0;
    for ( ; *s; ++s)
    {
        result *= base;
        result += *s - '0'; // see note
    }
    return result;
}
As written, this only works for number bases up to 10 using the numerals 0, ..., 9, which are guaranteed to be arranged in order in your execution character set. If you need larger number bases or more liberal sets of symbols, you need to replace *s - '0' in the indicated line by a suitable lookup mechanism that determines the digit value of your input character.
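For bases above 10, the lookup mechanism mentioned could be a small helper like this (a sketch of one possibility; it accepts 0-9 plus case-insensitive letters):

```cpp
#include <cctype>

// Digit value of one character for bases up to 36, or -1 if not a digit.
int digit_value(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    c = (char)std::tolower((unsigned char)c);
    if (c >= 'a' && c <= 'z') return c - 'a' + 10;
    return -1;
}

// The same place-value loop as the answer's parse(), using the helper.
unsigned int parse_base(const char *s, unsigned int base)
{
    unsigned int result = 0;
    for ( ; *s; ++s)
        result = result * base + (unsigned int)digit_value(*s);
    return result;
}
```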
I would use std::stringstream, but nobody has posted a solution using strtol yet, so here is one. Note that it doesn't handle out-of-range errors. On unix/linux you can use the errno variable to detect such errors (by comparing it to ERANGE).
BTW, there are strtod/strtof/strtold functions for floating-point numbers.
#include <iostream>
#include <cstdlib>
#include <string>

int power10combiner(const std::string& deciValue){
    const char* str = deciValue.c_str();
    char* end; // pointer to the first incorrect character, if there is one
    // strtol/strtoll accept the desired base as their third argument
    long int res = strtol(str, &end, 10);
    if (deciValue.empty() || *end != '\0') {
        // handle error somehow, for example by throwing an exception
    }
    return res;
}

int main()
{
    std::string s = "100";
    std::cout << power10combiner(s) << std::endl;
}