I'm trying to decode ISO8583 message bitmaps in a fast way. Supposing the first bitmap char is 'B', I want to convert it to the hex value 0xB (11 in decimal) so that afterwards I can check which bits are flagged. The last part I'm doing this way:
for (int i = 0; i < 4; i++) {
    std::cout << (value & (1 << i));
}
Because bitmaps and messages are long, I'm trying to decode them fast. I saw some SO posts using std::hex and std::stringstream, but isn't using those, or lookup tables, too much overhead?
What I want is to convert 'B' to the hex value 0xB, which is 11.
So, you want to interpret 'B' as a hexadecimal digit and get its numerical value, rather than "convert it to hexadecimal representation". The following works for any base (up to base 36) and with both upper and lower case digits:
int value =
(c >= '0' && c <= '9') ? (c - '0') :
(c >= 'A' && c <= 'Z') ? (c - 'A' + 0xA) :
(c >= 'a' && c <= 'z') ? (c - 'a' + 0xA) :
-1; // error
You can substitute z with f if you want the error value whenever the input is not a hexadecimal digit.
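Applied to the ISO8583 use case from the question, a minimal sketch might look like the following; the 16-character primary bitmap value and the field numbering (bit 1 = most significant bit of the first digit) are assumptions on my part, not something taken from your message format:
#include <cstddef>
#include <iostream>
#include <string>

// Convert one hex character to its numeric value, -1 if it is not a hex digit.
int hexDigit(char c)
{
    return (c >= '0' && c <= '9') ? (c - '0') :
           (c >= 'A' && c <= 'F') ? (c - 'A' + 0xA) :
           (c >= 'a' && c <= 'f') ? (c - 'a' + 0xA) :
           -1;
}

int main()
{
    std::string bitmap = "B220000000100000"; // hypothetical 16-digit primary bitmap

    for (std::size_t i = 0; i < bitmap.size(); ++i)
    {
        int value = hexDigit(bitmap[i]);
        if (value < 0)
        {
            std::cerr << "not a hex digit\n";
            return 1;
        }
        // Each hex digit covers 4 fields; the most significant bit comes first.
        for (int b = 3; b >= 0; --b)
        {
            if (value & (1 << b))
                std::cout << "field " << (i * 4 + (3 - b) + 1) << " is present\n";
        }
    }
}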
Old answer:
Characters are encoded as numbers. What numerical value represents which character is determined by the character encoding.
"Hex representation" of a number is a string of characters. Those characters represent the hexadecimal digits of the number individually. The numerical values of the characters in the hex representation are meaningless without the context of the character encoding.
Supposing the first bitmap char is a 'B', I want to convert it to hex representation (Transform 'B' into 0xB)
#include <string>
using namespace std::string_literals;
int main() {
char c = 'B';
std::string hex = "0x"s + c; // now hex is "0xB"
}
It would indeed be quite inefficient to first convert B to an integer, and then back to hex representation with std::stringstream if all you want is to add the prefix.
Create an array that acts as a map.
int hexNumbers[] = { 0xA, 0xB, 0xC, 0xD, 0xE, 0xF };
and then use it as below:
if ( c >= 'A' && c <= 'F' )
{
number = hexNumbers[c-'A'];
}
else if ( c >= 'a' && c <= 'f' )
{
number = hexNumbers[c-'a'];
}
You can also do it without a map:
if ( c >= 'A' && c <= 'F' )
{
number = 0xA + (c-'A');
}
else if ( c >= 'a' && c <= 'f' )
{
number = 0xA + (c-'a');
}
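For completeness, a small self-contained helper built from the branches above plus the digit case (hexValue is just a name I made up); it returns -1 for anything that is not a hex digit:
#include <iostream>

// Map '0'-'9', 'A'-'F' and 'a'-'f' to their numeric values, -1 otherwise.
int hexValue(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return 0xA + (c - 'A');
    if (c >= 'a' && c <= 'f') return 0xA + (c - 'a');
    return -1; // not a hex digit
}

int main()
{
    std::cout << hexValue('B') << "\n"; // 11
    std::cout << hexValue('7') << "\n"; // 7
    std::cout << hexValue('x') << "\n"; // -1
}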
Related
My char array is "00000f01" and I want it to end up as byte out[4] = {0x00, 0x00, 0x0f, 0x01};. I tried the code posted by @ChrisA, and thanks to him Serial.println( b, HEX ); shows exactly what I need, but I cannot access this output array: when I try to print the "out" array it seems empty. I also tried this code:
void setup() {
Serial.begin(9600);
char arr[] = "00000f01";
byte out[4];
byte arer[4];
auto getNum = [](char c){ return c ; };
byte *ptr = out;
for(char *idx = arr ; *idx ; ++idx, ++ptr ){
*ptr = (getNum( *idx++ ) << 4) + getNum( *idx );
}
int co=0;
//Check converted byte values.
for( byte b : out ){
Serial.println( co );
Serial.println( b,HEX );
arer[co]=b;
co++;
}
// Serial.print(getNum,HEX);
// Serial.print(out[4],HEX);
// Serial.print(arer[4],HEX);
/*
none of the commented-out code above worked */
}
void loop() {
}
but it is not working either. Please help me.
The title of your question leads me to believe there's something missing in either your understanding of char arrays or the way you've asked the question. Often people have difficulty understanding the difference between a hexadecimal character or digit, and the representation of a byte in memory. A quick explanation:
Internally, all memory is just binary. You can choose to represent it (i.e. display it) in bits, ASCII, decimal or hexadecimal, but that doesn't change what is stored in memory. On the other hand, since memory is just binary, characters always require a character encoding. That can be Unicode or another more exotic encoding, but typically it's just ASCII. So if you want a string of characters, whether they spell out a hexadecimal number or a sentence or random letters, they must be encoded in ASCII.
Now the body of the question can easily be addressed:
AFAIK, there's no way to "capture" the output of Serial.println( b, HEX ) programmatically, so you need to find another way to do your conversion from hex characters. The getNum() lambda provides the perfect opportunity. At the moment it does nothing, but if you adjust it so the character '0' turns into the number 0, the character 'f' turns into the number 15, and so on, you'll be well on your way.
Here's a quick and dirty way to do that:
void setup() {
Serial.begin(9600);
char arr[] = "00000f01";
byte out[4];
byte arer[4];
auto getNum = [](char c){ return (c <= '9' ? c-'0' : c-'a'+10) ; };
byte *ptr = out;
for(char *idx = arr ; *idx ; ++idx, ++ptr ){
*ptr = (getNum( *idx++ ) << 4) + getNum( *idx );
}
int co=0;
//Check converted byte values.
for( byte b : out ){
Serial.println( co );
if(b < 0x10)
Serial.print('0');
Serial.println( b,HEX );
arer[co]=b;
co++;
}
}
void loop() {
}
All I've done is to modify getNum so it returns 0 for '0', 15 for 'f', and so on in between. It does so by subtracting the value of the character '0' from the characters '0' through '9', or subtracting the value of the character 'a' from the characters 'a' through 'f'. Fortunately, the values of the characters '0' through '9' go up by one at a time, as do the characters 'a' through 'f'. Note this will fall over if you input 'F' or something, but it'll do for the example you show.
When I run the above code on a Uno, I get this output:
0
00
1
00
2
0F
3
01
which seems to be what you want.
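As a side note, if your input might also contain upper-case hex digits, one possible tweak to getNum (just a sketch, not tested on actual hardware) is to force letters to lower case before subtracting:
// Case-insensitive variant; assumes c is a valid hex digit ('0'-'9', 'a'-'f', 'A'-'F').
auto getNum = [](char c){
  if (c >= '0' && c <= '9') return c - '0';
  return (c | 0x20) - 'a' + 10;   // 'A' | 0x20 == 'a', so upper case works too
};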
Epilogue
To demonstrate how print functions in C++ can lead you astray as to the actual value of the thing you're printing, consider the cout version:
If I compile and run the following code in C++14:
#include <iostream>
#include <iomanip>
#include <string>
typedef unsigned char byte;
int main()
{
char arr[] = "00000f01";
byte out[4];
byte arer[4];
auto getNum = [](char c){ return c ; };
byte *ptr = out;
for(char *idx = arr ; *idx ; ++idx, ++ptr ){
*ptr = (getNum( *idx++ ) << 4) + getNum( *idx );
}
int co=0;
//Check converted byte values.
for( byte b : out ){
std::cout << std::setfill('0') << std::setw(2) << std::hex << b;
arer[co]=b;
co++;
}
}
I get this output:
00000f01
appearing to show that the conversion from hex characters has occurred. But this is only because cout ignores std::hex and treats b as a char to be printed in ASCII. Because the string "00000f01" has '0' as the first char in each pair, which happens to have a value (0x30) whose lower nybble is zero, the (getNum( *idx++ ) << 4) happens to contribute nothing. So b will contain the original second char of each pair, which when printed in ASCII looks like a hex string.
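If you actually want cout to honour std::hex in that loop, the usual fix (a sketch of the common idiom, nothing from the original code) is to widen the byte to an unsigned int before printing, so operator<< formats a number rather than a character:
#include <iostream>
#include <iomanip>

typedef unsigned char byte;

int main()
{
    byte b = 0x0f;
    // The cast makes operator<< pick the integer overload, so std::hex applies.
    std::cout << std::setfill('0') << std::setw(2) << std::hex
              << static_cast<unsigned int>(b) << "\n";   // prints 0f
}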
I'm not sure what you mean by "... with out changing to ASCII or any thing else" so maybe I'm misunderstanding your question.
Anyway, below is some simple code to convert the hex-string to an array of unsigned.
#include <cassert>
#include <iostream>

unsigned getVal(char c)
{
assert(
(c >= '0' && c <= '9') ||
(c >= 'a' && c <= 'f') ||
(c >= 'A' && c <= 'F'));
if (c - '0' < 10) return c - '0';
if (c - 'A' < 6) return 10 + c - 'A';
return 10 + c - 'a';
}
int main()
{
char arr[] = "c02B0f01";
unsigned out[4];
for (auto i = 0; i < 4; ++i)
{
out[i] = 16*getVal(arr[2*i]) + getVal(arr[2*i+1]);
}
for (auto o : out)
{
std::cout << o << std::endl;
}
}
Output:
192
43
15
1
If you change the printing to
for (auto o : out)
{
std::cout << "0x" << std::hex << o << std::dec << std::endl;
}
the output will be:
0xc0
0x2b
0xf
0x1
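And if you prefer each byte padded to two digits (0x0f, 0x01), one option, assuming you are willing to pull in <iomanip>, is to add std::setw and std::setfill to the same loop:
for (auto o : out)
{
    // setw applies to the next item only, so place it right before o.
    std::cout << "0x" << std::hex << std::setw(2) << std::setfill('0')
              << o << std::dec << std::endl;
}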
I am writing a C++ console application and I'm turning a into 1, b into 2 and so on. The thing is, it's outputting numbers like 48 and 52, even though the array I'm basing it on only goes up to 26.
Here's the code:
void calculateOutput() {
while (input[checkedNum] != alphabet[checkedAlpha]) {
checkedAlpha++;
if (checkedAlpha > 27) {
checkedAlpha = 0;
}
}
if (input[checkedNum] == alphabet[checkedAlpha]) {
cout << numbers[checkedAlpha] << "-";
checkedAlpha = 0;
checkedNum++;
calculateOutput();
}
}
Here is my number and alphabet arrays:
char alphabet [27] = { 'a','b','c','d','e','f','g','h','i','j','k','l','m','o','p','q','r','s','t','u','v','w','x','y','z',' '};
int numbers [27] = { '1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','20','21','22','23','24','25','26','0' };
It's an int array, so it will store the ASCII values of the characters.
If you look carefully at an ASCII table, you will find that 48, 49, 50, ... are the ASCII values of the digits 0, 1, 2, ...
What you have to do is subtract the value of the first digit in the table, '0' (48):
cout << numbers[checkedAlpha] - '0' << "-";
or better, store the numbers as numbers, not characters:
int numbers [27] = { 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,0 };
Btw, here is a hint which makes it easier for you:
tolower(inputAlphabet[index]) - 'a' + 1 // For 'a' output is 1 and so on :-)
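For example, a small self-contained loop built on that hint (the sample string and the skipping of non-letters are my own additions) prints each letter's position:
#include <cctype>
#include <iostream>
#include <string>

int main()
{
    std::string input = "Hello";
    for (char ch : input)
    {
        if (std::isalpha(static_cast<unsigned char>(ch)))
            std::cout << (std::tolower(static_cast<unsigned char>(ch)) - 'a' + 1) << "-";
    }
    // prints 8-5-12-12-15-
}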
The algorithm to get the number (index) of a letter of the alphabet is quite simple. No need for tables; a simple subtraction does the trick.
int getLetterIndex(char c)
{
// returns -1 if c is not a letter
if ('a' <= c && c <= 'z')
return 1 + c - 'a';
if ('A' <= c && c <= 'Z')
return 1 + c - 'A';
return -1;
}
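A minimal usage sketch for that function (the sample characters are mine):
#include <iostream>

// Assumes getLetterIndex from above is visible in the same file.
int main()
{
    const char samples[] = { 'a', 'Z', '5' };
    for (char c : samples)
        std::cout << getLetterIndex(c) << "\n";   // prints 1, 26, -1
}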
I am currently writing a Caesar cipher program. It should encrypt both lower and upper case.
e.g.
If I type in a, it should shift the keys by 3 and the final output will become d.
Take a look at my code:
char c;
c = (((97-52)+3) % 26) + 52;
cout << c;
The letter 'a' has an ASCII code of 97.
So by right
1) ((97-52)+3) will give you 48
2) 48 % 26 will give you 8 since 48/26 will give you a remainder of 8.
3) 8 + 52 = 60 (which should, by right, give a value of '>' according to the ASCII table)
but the output that I got is J, and I don't understand why I am getting 'J' instead of '>'.
My concepts might be wrong, so I need help.
Let me first link the ASCII chart I use: http://pl.wikipedia.org/wiki/ASCII
The website is Polish, but the table itself is in English.
I think it's plainly obvious that the problem is the equation you use:
(((letter-52)+3) % 26) + 52;
Actually, the first capital letter in ASCII is 65 (hexadecimal 0x41; follow along with the chart provided).
Your idea with the modulo would be fine if there were no chars between the letter blocks in ASCII. But there are (again, check the chart).
That is why you should manually check whether the character:
is a capital letter: if (letter >= 0x41 && letter <= 0x5a)
is a non-capital: if (letter >= 0x61 && letter <= 0x7a)
Usually, when implementing a Caesar cipher, you should follow these rules:
Replace a capital letter with the capital letter moved in the alphabet by a given number.
If the letter would fall outside the alphabet, continue counting from the start of the alphabet (X moved 5 to the right gives C).
Other chars stay the same.
Now let's implement this (in the code I'll use character literals for the letters, to avoid mistakes):
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;
string Ceasar(string input, int offset)
{
string result = "";
for (int i = 0; i < input.length(); ++i)
{
// For capital letters
if (input[i] >= 'A' && input[i] <= 'Z')
{
result += (char) ((input[i] - 'A' + offset) % ('Z' - 'A' + 1) + 'A'); // modulo 26
continue;
}
// For non-capital
if (input[i] >= 'a' && input[i] <= 'z')
{
result += (char) ((input[i] - 'a' + offset) % ('z' - 'a' + 1) + 'a'); // modulo 26
continue;
}
// For others
result += input[i];
}
return result;
}
int main()
{
cout << Ceasar(string("This is EXamPLE teXt!?"), 8).c_str();
system("PAUSE");
}
#include <iostream>
using namespace std;
int main() {
cout<<"Give me a letter" <<endl;
char letter;
cin>>letter;
cout<<letter;
(int)letter;
letter+=2;
cout<<(char)letter;
(int)letter;
letter-=25;
cout<<(char)letter;
return 0;
}
How would I manipulate the numbers so that the output will always be a letter?
i.e. if the letter z was chosen and adding 2 gives a symbol, how would I manipulate it so that it always stays within the ranges for capital and lowercase letters? Thanks. Please try to keep answers at a beginner level; I am new to this.
if(letter > 'z') {
//do stuff
}
if(letter < 'a' && letter > 'Z') {
//do stuff
}
if(letter < 'A') {
//do stuff
}
It just depends on how you want to handle the character when it goes into one of the three ranges on the ASCII chart in which the characters are not letters.
As a side note, you don't have to cast a char to an int to do math with it.
char myChar = 'a' + 2;
cout << myChar;
This will print: c
c has an ASCII value of 2 more than a.
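For instance, if you want the letter to wrap around instead of turning into a symbol, a small sketch of the "do stuff" parts (the shift of 2 is just the example value from your question) could look like this:
#include <iostream>

int main()
{
    char letter = 'z';
    int shift = 2;

    if (letter >= 'a' && letter <= 'z')
        letter = 'a' + (letter - 'a' + shift) % 26;   // 'z' + 2 wraps to 'b'
    else if (letter >= 'A' && letter <= 'Z')
        letter = 'A' + (letter - 'A' + shift) % 26;

    std::cout << letter << "\n";   // prints b
}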
The surest method is to use a table for each category, and do
your arithmetic on its index, modulo the size of the table.
Thus, for just lower case letters, you might do something like:
#include <algorithm>
#include <string>

char
transcode( char original )
{
char results = original;
static std::string const lower( "abcdefghijklmnopqrstuvwxyz" );
auto pos = std::find( lower.begin(), lower.end(), results );
if ( pos != lower.end() ) {
int index = pos - lower.begin();
index = (index + 2) % lower.size();
results = lower[ index ];
}
return results;
}
This solution is general, and will work regardless of the sets
of letters you want to deal with. For digits (and for upper and
lower case, if you aren't too worried about portability), you
can take advantage of the fact that the code points are
contiguous, and do something like:
char
transcode( char original )
{
char results = original;
if ( results >= '0' && results <= '9' ) {
char tmp = results - '0';
tmp = (tmp + 2) % 10;
results = tmp + '0';
}
return results;
}
An alternative implementation would be to use something like:
results = results + 2;
if ( results > '9' ) {
results -= 10;
}
in the if above. These two solutions are mathematically
equivalent.
This is only guaranteed to work for digits, but will generally
work for upper or lower case if you limit yourself to the
original ASCII character set. (Be aware that most systems today
support extended character sets.)
You can test directly against ASCII chars by using the 'x' character-literal notation. Further, you can combine tests using && (logical "and"):
if ('a' <= letter && letter <= 'z') {
// Letter is between 'a' and 'z'
} else if ('A' <= letter && letter <= 'Z') {
// Letter is between 'A' and 'Z'
} else {
// Error! Letter is not between 'a' and 'z' or 'A' and 'Z'
}
Or you can use the standard library function std::isalpha (from <cctype>), which handles this for you:
if (std::isalpha(letter)) {
// Letter is between 'a' and 'z' or 'A' and 'Z'
} else {
// Error! Letter is not between 'a' and 'z' or 'A' and 'Z'
}
I'm reading a file with a line of text and changing the characters based on a displacement given by the user. While it works for some characters, it doesn't work for others beyond a certain point.
My file contains this text: "This is crazy".
When I run my code with a displacement of 20, this is what I get:
▒bc▒ c▒ w▒u▒▒
string Security::EncWordUsingRot(int rotNum, string word)
{
rotNum = rotNum%26;
string encWord = word;
for (int i = 0; i < word.size(); i++)
{
char c = word[i];
c = tolower(c);
if ((c < 'a') || (c > 'z'))
encWord[i] = c;
else
{
c = (c + rotNum);
if (c > 'z')
c = (c - 26);
}
encWord[i] = c;
}
return encWord;
}
EDIT:
I changed the commented sections to correct my error. I changed unsigned char c = word[i] back to char c = word[i]. I also added two more lines of code that take care of the value of c being lower than 'a'. I did this because I noticed an issue when I wanted to return the encrypted text to its original form.
string Security::EncWordUsingRot(int rotNum, string word)
{
rotNum = rotNum%26;
string encWord = word;
for (int i = 0; i < word.size(); i++)
{
char c = word[i]; //removed unsigned
c = tolower(c);
if ((c < 'a') || (c > 'z'))
encWord[i] = c;
else
{
c = (c + rotNum);
if (c > 'z')
c = (c - 26);
if (c < 'a') //but I added this if statement if the value of c is less than 'a'
c = (c + 26);
}
encWord[i] = c;
}
return encWord;
}
Change:
char c = word[i];
To:
unsigned char c = word[i];
In C and C++ you should always pay attention to numeric overflow, because the language's assumption is that a programmer will never make such a mistake.
A char is a kind of integer and is quite often 8 bits and signed, giving it an acceptable range of -128...127. This means that when you store a value in a char variable you should never exceed those bounds.
char is also a "storage type", meaning that computations are never done directly on chars. For example:
char a = 'z'; // numeric value is 122 in ASCII
int x = a + 20; // 122 + 20 = 142
x will actually get the value 142 because the computation did not "overflow" (all char values are first converted to integers in an expression)
However, storing a value bigger than the allowable range in a variable is undefined behaviour, and code like
char a = 'z'; // numeric value is 122 in ASCII
char x = a + 20; // 122 + 20 = 142 (too big, won't fit)
is not acceptable: the computation is fine but the result doesn't fit into x.
Storing a value outside the valid range for signed chars in a signed char variable is exactly what your code did and that's the reason for the strange observed behaviour.
A simple solution is to use an integer to store the intermediate results instead of a char.
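A sketch of that suggestion applied to the rotation (the helper name rotateLower is mine, not from the original code): do the arithmetic in an int and only narrow back to char once the result is back inside 'a'...'z':
// Rotate one lower-case letter by rotNum; non-letters are returned unchanged.
char rotateLower(char c, int rotNum)
{
    if (c < 'a' || c > 'z')
        return c;
    int shifted = (c - 'a' + rotNum) % 26;   // intermediate result kept in an int
    if (shifted < 0)
        shifted += 26;                       // also handles negative rotations
    return static_cast<char>('a' + shifted); // safely back in the char range
}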
A few more notes:
A few char-related functions do indeed handle integers, because they must be able to handle the special value EOF in addition to all valid chars. For example, fgetc returns an int and isspace accepts an int (they return/accept either the code of the char converted to unsigned, or EOF).
char may be signed or unsigned depending on the compiler/options; if it is unsigned and 8 bits wide, the allowable range is 0...255.
Most often when storing a value outside bounds into a variable you simply get a "wrapping" behavior, but this is not guaranteed and doesn't always happen. For example a compiler is allowed to optimize (char(x + 20) < char(y + 20)) to (x < y) because the assumption is that the programmer will never ever overflow with signed numeric values.