I want to convert between char and int in C++, but I've run into a problem with the char-to-int conversion.
I did a test with two numbers, 103 and 155. 103 gives me "g" as a char and 155 gives me "ø", and both are correct, but the problem appears when I try to turn the chars back into ints: "g" gives me 103 (correct), but I don't know why "ø" gives me -101.
I'm storing the numbers in a char array and printing them back with printf("%d", salida[x]);.
This is my test program:
#include <cstdlib>
#include <cstdio>

using namespace std;

int numero = 26523;

char *inttochar(int number) {
    char *salida;
    for (int j = 0; j <= 5; j++) {
        salida[5-j] = number % 256;
        printf("%d = %d = %c = %c = %d\n", 5-j, number % 256, number % 256, salida[5-j], salida[5-j]);
        number = number / 256;
    }
    return salida;
}

int main(int argc, char** argv) {
    char *texto = inttochar(numero);
    for (int j = 0; j <= 5; j++) {
        printf("%d: ", j+1);
        printf("%c = %d\n", texto[j], texto[j]);
    }
    return 0;
}
And result is:
5 = 155 = ø = ø = -101
4 = 103 = g = g = 103
3 = 0 = = = 0
2 = 0 = = = 0
1 = 0 = = = 0
0 = 0 = = = 0
1: = 0
2: = 0
3: = 0
4: = 0
5: g = 103
6: ø = -101
With this program I want to convert a number to base-256 chars. What am I doing wrong?
Thanks, and I'm sorry for my English.
Characters are signed values between -128 and 127 on most systems. You're converting something from outside that range: the conversion to char truncates, but the bit pattern is preserved. Converting back the other way puts you into the signed range, and 155 isn't part of it. The bit pattern for 155 is 0x9b; interpreted as a two's-complement signed byte (invert the bits and add one to get the magnitude), it becomes -0x65, which is -101 in decimal.
You can fix it with a bitwise AND that strips off the sign-extended bits: salida[5-j] & 0xff.
Edit: As noted in the comments your salida variable is intended to be a string but you never allocate any storage for it.
char *salida = new char[6];
char appears to be a signed type on your platform. That means 155 won't fit in it. You're getting that 155 interpreted as a 2's complement signed number, which equals -101.
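Putting the two fixes from the answers together (allocate the buffer, and mask with 0xff when printing back), here is a minimal sketch of the program from the question with just those changes:

#include <cstdio>

// Minimal sketch: same logic as the question, but salida is actually
// allocated, and the value is masked with 0xff when printed back so the
// sign extension of a signed char doesn't turn 155 into -101.
char *inttochar(int number) {
    char *salida = new char[6];
    for (int j = 0; j <= 5; j++) {
        salida[5-j] = number % 256;
        number = number / 256;
    }
    return salida;
}

int main() {
    char *texto = inttochar(26523);
    for (int j = 0; j <= 5; j++)
        printf("%d: %d\n", j + 1, texto[j] & 0xff);   // prints 0 0 0 0 103 155
    delete[] texto;
    return 0;
}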
Related
I'm working on a C++ app on the Windows platform. There's an unsigned char pointer that gets bytes in decimal format.
unsigned char array[160];
This will have values like this,
array[0] = 0
array[1] = 0
array[2] = 176
array[3] = 52
array[4] = 0
array[5] = 0
array[6] = 223
array[7] = 78
array[8] = 0
array[9] = 0
array[10] = 123
array[11] = 39
array[12] = 0
array[13] = 0
array[14] = 172
array[15] = 51
... and so forth.
I need to take each block of 4 bytes and then calculate its decimal value.
So, for example, for the first 4 bytes the combined hex value is B034. Now I need to convert this to decimal and divide it by 1000.
As you can see, in each 4-byte block the first 2 bytes are always 0, so I can ignore those and take only the last 2 bytes of the block. From the example above, those are 176 and 52.
There are many ways of doing this, but I want to do it using bitwise operators.
Below is what I tried, but it's not working. Basically I'm ignoring the first 2 bytes of every 4-byte block.
int index = 0;
for (int i = 0; i <= 160; i++) {
    index++;
    index++;
    float Val = ((Array[index]<<8) + Array[index+1]) / 1000.0f;
    index++;
}
Since you're processing the array four by four, I recommend incrementing i by 4 in the for loop. You can also avoid confusion by dropping the unnecessary index variable: you already have i in the loop and can use it directly.
Another thing: prefer bitwise OR over arithmetic addition when you're "concatenating" numbers, even though the outcome is identical here.
for (int i = 0; i < 160; i += 4) {
    float val = ((array[i + 2] << 8) | array[i + 3]) / 1000.0f;
}
First of all, i <= 160 is one iteration too many.
Second, your incrementation is wrong; for index, you have
Iteration 1:
1, 2, 3
And you're combining 2 and 3 - this is correct.
Iteration 2:
4, 5, 6
And you're combining 5 and 6 - should be 6 and 7.
Iteration 3:
7, 8, 9
And you're combining 8 and 9 - should be 10 and 11.
You need to increment four times per iteration, not three.
But I think it's simpler to start looping at the first index you're interested in - 2 - and increment by 4 (the "stride") directly:
for (int i = 2; i < 160; i += 4) {
    float Val = ((Array[i]<<8) + Array[i+1]) / 1000.0f;
}
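As a quick sanity check of the arithmetic, here is a tiny stand-alone program using the sample bytes 176 and 52 from the question:

#include <cstdio>

int main()
{
    unsigned char array[4] = {0, 0, 176, 52};     // first 4-byte block from the question
    int combined = (array[2] << 8) | array[3];    // 0xB034 = 45108
    float val = combined / 1000.0f;               // 45.108
    printf("%d %.3f\n", combined, val);
    return 0;
}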
This code outputs 12480. Why? I expected it to print 124816. Could someone explain that to me?
int main()
{
    char c = 48; // From the ASCII table, one can find that char 48 represents '0'.
    int i, mask = 1;
    for (i = 1; i <= 5; i++)
    {
        printf("%c", c | mask); // print the char-formatted output
        mask = mask << 1;
    }
    return 0;
}
You are printing one variable as a char; you will never get 16 (which is two characters) out of that.
You have 48 = 110000 in binary. When you bitwise-OR it with 1 you get 110001 = 49, which as an ASCII character is '1'.
The next time you get 110000 | 10 = 110010 = 50, which is '2'.
This goes on until you reach the 5th iteration, where 110000 | 10000 = 110000 = 48, which is '0'.
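If the intent was to see 124816, i.e. the mask values 1, 2, 4, 8, 16 themselves (a guess at what was expected), printing the mask as an integer instead of a single character would do it; a minimal sketch:

#include <cstdio>

int main()
{
    int mask = 1;
    for (int i = 1; i <= 5; i++)
    {
        printf("%d", mask);   // %d can print multi-digit values such as 16
        mask = mask << 1;
    }
    return 0;                 // output: 124816
}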
Given this implementation of atoi in C++
#include <stdio.h>

// A simple atoi() function
int myAtoi(char *str)
{
    int res = 0; // Initialize result

    // Iterate through all characters of the input string and update the result
    for (int i = 0; str[i] != '\0'; ++i)
        res = res*10 + str[i] - '0';

    // Return the result.
    return res;
}

// Driver program to test the above function
int main()
{
    char str[] = "89789";
    int val = myAtoi(str);
    printf("%d ", val);
    return 0;
}
How exactly does the line
res = res*10 + str[i] - '0';
change a string of digits into int values? (I'm fairly rusty with C++, to be honest.)
The standard requires that the digits are consecutive in the character set. That means you can use:
str[i] - '0'
To translate the character's value into its equivalent numerical value.
The res * 10 part shifts the digits already in the running total one decimal place to the left, to make room for the new digit you're inserting.
For example, if you were to pass "123" to this function, res would be 1 after the first loop iteration, then 12, and finally 123.
On each step, that line does two things:
Shifts all digits one decimal place to the left
Places the current digit in the ones place
The part str[i] - '0' takes the ASCII code of the current character, which is one of the sequential digits "0123456789", and subtracts the code for '0' from it. This leaves a number in the range 0..9 telling you which digit sits at that place in the string.
So when looking at your example case the following would happen:
i = 0 → str[i] = '8': res = 0 * 10 + 8 = 8
i = 1 → str[i] = '9': res = 8 * 10 + 9 = 89
i = 2 → str[i] = '7': res = 89 * 10 + 7 = 897
i = 3 → str[i] = '8': res = 897 * 10 + 8 = 8978
i = 4 → str[i] = '9': res = 8978 * 10 + 9 = 89789
And there's your result.
The digits 0123456789 are sequential in ASCII.
The char datatype (and char literals like '0') is an integral type. In this case, '0' is equivalent to 48. Subtracting this offset gives you the digit in numerical form.
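A quick check of that offset, assuming an ASCII system:

#include <cstdio>

int main()
{
    printf("%d %d %d\n", '0', '7', '7' - '0');   // prints 48 55 7 on an ASCII system
    return 0;
}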
Let's take an example:
str = "234";
To convert it into an int, the basic idea is to process each character of the string like this:
res = 2*100 + 3*10 + 4
or
res = 0
step1: res = 0*10 + 2 = 0 + 2 = 2
step2: res = res*10 + 3 = 20 + 3 = 23
step3: res = res*10 + 4 = 230 + 4 = 234
Now, since each character in "234" is actually a character, not an int, it has an ASCII value associated with it:
ASCII of '2' = 50
ASCII of '3' = 51
ASCII of '4' = 52
ASCII of '0' = 48
refer: http://www.asciitable.com/
If I had done this:
res = 0;
res = res*10 + str[0] = 0 + 50 = 50
res = res*10 + str[1] = 500 + 51 = 551
res = res*10 + str[2] = 5510 + 52 = 5562
then I would have obtained 5562, which we don't want.
Remember: when characters are used in arithmetic expressions, their ASCII values are used (automatic conversion of char -> int). Hence we need to convert the character '2' (value 50) to the int 2, which we can accomplish like this:
'2' - '0' = 50 - 48 = 2
Let's solve it again with this correction:
res = 0
res = res*10 + (str[0] - '0') = 0 + (50 - 48) = 0 + 2 = 2
res = res*10 + (str[1] - '0') = 20 + (51 - 48) = 20 + 3 = 23
res = res*10 + (str[2] - '0') = 230 + (52 - 48) = 230 + 4 = 234
234 is the required answer.
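Here is a small stand-alone program that reproduces both calculations above, if you want to check them yourself:

#include <cstdio>

int main()
{
    const char *str = "234";
    int wrong = 0, right = 0;
    for (int i = 0; str[i] != '\0'; ++i) {
        wrong = wrong * 10 + str[i];          // uses the raw ASCII codes: ends up as 5562
        right = right * 10 + (str[i] - '0');  // subtracts '0' first: ends up as 234
    }
    printf("%d %d\n", wrong, right);          // prints 5562 234
    return 0;
}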
I managed to convert a string into a hex byte array; now I need to convert the new array back into a string, because the function Sha256.write needs a char array. What would be the way to do it?
char hexstring[] = "020000009ecb752aac3e6d2101163b36f3e6bd67d0c95be402918f2f00000000000000001036e4ee6f31bc9a690053320286d84fbfe4e5ee0594b4ab72339366b3ff1734270061536c89001900000000";
int i;
int n;
uint8_t bytearray[80];
Serial.println("Starting...");
char tmp[3];
tmp[2] = '\0';
int j = 0;
// GET ARRAY
for (i = 0; i < strlen(hexstring); i += 2) {
    tmp[0] = hexstring[i];
    tmp[1] = hexstring[i+1];
    bytearray[j] = strtol(tmp, 0, 16);
    j += 1;
}

for (i = 0; i < 80; i += 1) {
    Serial.println(bytearray[i]);
}
int _batchSize;
unsigned char hash[32];
SHA256_CTX ctx;
int idx;
Serial.println("SHA256...");
for(_batchSize = 100000; _batchSize > 0; _batchSize--){
    bytearray[76] = nonce;
    // Sha256.write(bytearray);
    sha256_init(&ctx);
    sha256_update(&ctx,bytearray,80);
    sha256_final(&ctx,hash);
    sha256_init(&ctx);
    sha256_update(&ctx,hash,32);
    sha256_final(&ctx,hash); // is this correct? I need it in bytes too
    // print_hash(hash);
    int zeroBytes = 0;
    for (int i = 31; i >= 28; i--, zeroBytes++)
        if(hash[i] > 0)
            break;
    if(zeroBytes == 4){ // SOLUTION FOUND, NOW I NEED THIS AS A STRING
        printf("0x");
        for (n = 0; n < 32; n++)
            Serial.println(printf("%02x", hash[n])); // ERROR :(
    }
    // increase
    if(++nonce == 4294967295)
        nonce = 0;
}
}
}
Output of the array on the serial port:
2
0
0
0
158
203
117
42
172
62
109
33
1
22
59
54
243
230
189
103
208
201
91
228
2
145
143
47
0
0
0
0
0
0
0
0
16
54
228
238
111
49
188
154
105
0
83
50
2
134
216
79
191
228
229
238
5
148
180
171
114
51
147
102
179
255
23
52
39
0
97
83
108
137
0
25
0
0
0
0
How do I convert this back to a hex string?
UPDATE
This solution works for me, thanks all!
void printHash(uint8_t* hash) {
    int id;
    for (id = 0; id < 32; id++) {
        Serial.print("0123456789abcdef"[hash[id] >> 4]);
        Serial.print("0123456789abcdef"[hash[id] & 0xf]);
    }
    Serial.println();
}
Skip to the section Addressing your code... at the bottom for the most relevant content
(the stuff up here is barely useful blather)
The purpose of your function:
Sha256.write((char *)bytearray);
I believe it is to write more data to the running hash (from this).
Therefore, I am not sure how your question, "how to convert this back to a hex string?", relates to the way you are using it.
Let me offer another approach, for the sake of illustrating how you might go about turning the array back into the form of a "hexadecimal string":
From Here
Here is a code fragment that will calculate the digest for the string "abc"
SHA256_CTX ctx;
u_int8_t results[SHA256_DIGEST_LENGTH];
char *buf;
int n;
buf = "abc";
n = strlen(buf);
SHA256_Init(&ctx);
SHA256_Update(&ctx, (u_int8_t *)buf, n);
SHA256_Final(results, &ctx);
/* Print the digest as one long hex value */
printf("0x");
for (n = 0; n < SHA256_DIGEST_LENGTH; n++)
    printf("%02x", results[n]);
putchar('\n');
resulting in:
"0xba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad".
In this example, the array I believe you want is contained in u_int8_t results.
There is not enough description in your post to be sure this will help, let me know in the comments, and I will try to address further questions.
Added after your edit:
Continuing from the example above, to put the array contents of results back into a string, you can do something like this:
char *newString;

/* "0x" prefix + two hex chars per byte + terminating NUL */
newString = (char *)malloc(2 + SHA256_DIGEST_LENGTH * 2 + 1);
memset(newString, 0, 2 + SHA256_DIGEST_LENGTH * 2 + 1);
strcat(newString, "0x");
for (i = 0; i < SHA256_DIGEST_LENGTH; i++)
{
    sprintf(newString + strlen(newString), "%02x", results[i]);
}
// use newString for stuff...
free(newString);
Addressing your code, and your question directly:
Your code block:
for(_batchSize = 100000; _batchSize > 0; _batchSize--){
    bytearray[76] = _batchSize;
    Sha256.write((char *)bytearray); // here is the error
}
is not necessary if all you want to do is convert the byte array into a "hexadecimal string".
Your array, defined as:
uint8_t bytearray[80];
already contains all the necessary values at this point, as you illustrated with your latest edit. If you want to return this data to "hexadecimal string" form, then this will do it for you (replacing results with your bytearray):
char *newString;
size_t len = sizeof(bytearray) / sizeof(bytearray[0]); /* 80 bytes in your case */

/* "0x" prefix + two hex chars per byte + terminating NUL */
newString = (char *)malloc(2 + len * 2 + 1);
memset(newString, 0, 2 + len * 2 + 1);
strcat(newString, "0x");
for (i = 0; i < len; i++)
{
    sprintf(newString + strlen(newString), "%02x", bytearray[i]);
}
// use newString for stuff...
free(newString);
I'm writing a program that converts a binary string to decimal. I wanted to validate my output before I get really started on this method. I have the following code:
int get_val()
{
int sum =0;
for(int num_bits = size; num_bits>0; num_bits--)
{
printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
}
}
When I input a string of 16 zeros, I get the following output:
String sub 16 is 24
String sub 15 is 0
String sub 14 is 0
String sub 13 is 0
String sub 12 is 23
String sub 11 is 0
String sub 10 is 0
String sub 9 is 0
String sub 8 is 22
String sub 7 is 0
String sub 6 is 0
String sub 5 is 0
String sub 4 is 21
String sub 3 is 0
String sub 2 is 0
String sub 1 is 0
Why would I be getting different values if I input all zeros?
EDIT: bin is "0000000000000000"
As long as the question isn't updated, perhaps this example code helps. It converts a binary string into an integer. I tried to keep as much of your code and variable names as possible.
#include <stdio.h>
#include <stdlib.h>
#include <string>
using namespace std;
int main() {
string bin = "000111010";
int size = bin.length();
int sum = 0;
for(int num_bits = 1; num_bits <= size; num_bits++) {
sum <<= 1;
sum += bin[num_bits - 1] - '0';
}
printf("Binary string %s converted to integer is: %i\n", bin.c_str(), sum);
}
As already said in the comments, the main trick here is to convert the ASCII characters '0' and '1' to the integers 0 and 1, which is done by subtracting the value of '0'. Also, I changed the traversal order of the string because this way you can shift the integer after each bit and always set the value of the currently lowest bit.
Short answer: you wouldn't.
Long answer: there are a few issues with this. The first big issue is that if we assume bin is a standard array of characters of length "size", then your first print is invalid: the array index is off by 1. Consider the code example:
int size = 16;
char * bin = new char[size];
for(int i = 0; i < size; i++)
{
    bin[i] = 0;
}
for(int num_bits = size; num_bits > 0; num_bits--)
{
    printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
}
Which produces:
String sub 16 is -3
String sub 15 is 0
String sub 14 is 0
String sub 13 is 0
String sub 12 is 0
String sub 11 is 0
String sub 10 is 0
String sub 9 is 0
String sub 8 is 0
String sub 7 is 0
String sub 6 is 0
String sub 5 is 0
String sub 4 is 0
String sub 3 is 0
String sub 2 is 0
String sub 1 is 0
Judging by the actual output you got, I'm guessing you did something like:
int size = 16;
int * ints = new int[size];
char * bin;

// Fill with numbers, not zeros, based on the evidence
for(int i = 0; i < size; i++)
{
    ints[i] = 20 + i;
}

// Copy over to a character buffer
bin = (char*)(void*)&(ints[0]);

for(int num_bits = size; num_bits > 0; num_bits--)
{
    printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
}
That explains the output you saw perfectly. So, I'm thinking your input assumption, that bin points to an array of character zeros, is not true. There are a few really big problems with this, assuming you did something like that.
Your assumption that the memory is all zero is wrong; you need to explain that, or post the real code so we can help further.
You can't just treat a memory buffer of integers as characters: a string is made up of one-byte characters (typically), while integers are typically 4 bytes.
Arrays in C++ start at 0, not 1.
Casting a character to an integer [ int('0') ] does not intelligently convert: the integer that comes out of that is decimal 48, not decimal 0. (There is a function atoi that will do that, as well as other better ones, or the other suggestion to use subtraction; a sketch follows below.)
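For reference, here is a minimal sketch of get_val with those points addressed. It assumes the input really is a string of '0'/'1' characters and takes it as a parameter instead of relying on the globals bin and size from the question:

#include <cstdio>

// Sketch: 0-based indexing, character-to-digit conversion with - '0',
// and an actual return value.
int get_val(const char *bin)
{
    int sum = 0;
    for (int i = 0; bin[i] != '\0'; ++i)
        sum = sum * 2 + (bin[i] - '0');   // shift left one binary place, add the new bit
    return sum;
}

int main()
{
    printf("%d\n", get_val("000111010"));   // prints 58
    return 0;
}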