I'm writing a program that converts a binary string to decimal. I wanted to validate my output before I get really started on this method. I have the following code:
int get_val()
{
    int sum = 0;
    for (int num_bits = size; num_bits > 0; num_bits--)
    {
        printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
    }
}
When I input a string of 16 zeros, I get the following output:
String sub 16 is 24
String sub 15 is 0
String sub 14 is 0
String sub 13 is 0
String sub 12 is 23
String sub 11 is 0
String sub 10 is 0
String sub 9 is 0
String sub 8 is 22
String sub 7 is 0
String sub 6 is 0
String sub 5 is 0
String sub 4 is 21
String sub 3 is 0
String sub 2 is 0
String sub 1 is 0
Why would I be getting different values if I input all zeros?
EDIT: bin is "0000000000000000"
Until the question is updated, perhaps this example code helps. It converts a binary string into an integer. I tried to keep as much of your code and variable names as possible.
#include <stdio.h>
#include <stdlib.h>
#include <string>

using namespace std;

int main() {
    string bin = "000111010";
    int size = bin.length();
    int sum = 0;
    for (int num_bits = 1; num_bits <= size; num_bits++) {
        sum <<= 1;
        sum += bin[num_bits - 1] - '0';
    }
    printf("Binary string %s converted to integer is: %i\n", bin.c_str(), sum);
}
As already said in the comments, the main trick here is to convert the ASCII characters '0' and '1' to the integers 0 and 1, which is done by subtracting the value of '0'. I also changed the traversal order of the string: this way you can shift the integer after each bit and always set the value of the currently lowest bit.
Short answer, you wouldn't.
Long answer: there are a few issues with this. The first big one is that, if we assume bin is a plain character array of length size, then your first print reads out of bounds; the array index is off by one. Consider this code example:
int size = 16;
char *bin = new char[size];

for (int i = 0; i < size; i++)
{
    bin[i] = 0;
}

for (int num_bits = size; num_bits > 0; num_bits--)
{
    printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
}
Which produces:
String sub 16 is -3
String sub 15 is 0
String sub 14 is 0
String sub 13 is 0
String sub 12 is 0
String sub 11 is 0
String sub 10 is 0
String sub 9 is 0
String sub 8 is 0
String sub 7 is 0
String sub 6 is 0
String sub 5 is 0
String sub 4 is 0
String sub 3 is 0
String sub 2 is 0
String sub 1 is 0
Judging by the actual output you got, I'm guessing you did something like:
int size = 16;
int *ints = new int[size];
char *bin;

// Fill with numbers, not zeros, based on the evidence
for (int i = 0; i < size; i++)
{
    ints[i] = 20 + i;
}

// Reinterpret the integer buffer as a character buffer
bin = (char *)(void *)&(ints[0]);

for (int num_bits = size; num_bits > 0; num_bits--)
{
    printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
}
That explains the output you saw perfectly. So I'm thinking your input assumption, that bin points to an array of character zeros, is not true. Assuming you did something like the above, there are a few really big problems:
1. Your assumption that the memory is all zeros is wrong; you need to explain that, or post the real code so we can take another look.
2. You can't just treat a memory buffer of integers as characters: a string is made up of one-byte characters (typically), while integers are typically 4 bytes.
3. Arrays in C++ start at 0, not 1.
4. Casting a character to an integer, as in int('0'), does not intelligently convert: the result is decimal 48, not decimal 0. (The function atoi will do that conversion, as will the subtraction suggested in the other answer.)
Related
I am new at C++. I am trying to make a program that reads a matrix from ftitikberat.txt with this format:
id[...] -> matrix id
row[id[...]] -> number of matrix rows
col[id[...]] -> number of matrix columns
matriks[id[...]][row[id[...]]][col[id[...]]] -> matrix
name[id[...]] -> matrix name
The program compiles, but when I try to read ftitikberat.txt it always crashes.
Here's the code:
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

int main()
{
    int row[1000];
    int col[1000];
    int matriks[1000][4][4];
    int id[1000];
    int i, j, k;
    string name[1000];

    ifstream ifile("ftitikberat.txt");
    for (i = 1; i <= 1000; i++)
    {
        ifile >> id[i] >> name[i] >> row[i] >> col[i];
        for (j = 1; j <= row[id[i]]; j++)
        {
            for (k = 1; k <= col[id[i]]; k++)
            {
                ifile >> matriks[id[i]][j][k];
            }
        }
    }
    ifile.close();
}
And here is the text file:
1 null 1 1 0
2 null 1 1 0
3 null 1 1 0
4 null 1 1 0
.
. //until
.
998 null 1 1 0
999 null 1 1 0
1000 null 1 1 0
I have tried changing the text to:
...
998 null 1 1 0 1
...
When I compile and run that, the program works fine, except that I can't use ids 999 and 1000 because they come out messed up. The same happens when I change the line for id 997 to (997 null 1 1 0 1): the program doesn't crash, but I can't use ids 998, 999, and 1000.
I have also tried increasing the array sizes one by one, and the program stops crashing when I change the size of id and name from 1000 to 1001, but I don't know why that works.
Can somebody please explain why the program (before I changed the text/array sizes) didn't work? I have been staring at this simple program for hours, but still can't find the problem :')
Array indexes go from 0 to size-1, not from 1 to size, so you are accessing outside the arrays when you do:
for(i=1; i<=1000; i++)
it should be:
for (i = 0; i < 1000; i++)
Your other loops should also start from 0 and use < instead of <=.
This code outputs 12480. Why? I expected it to print 124816. Could someone explain this to me?
int main()
{
    char c = 48; // From an ASCII table, char 48 represents '0'.
    int i, mask = 1;
    for (i = 1; i <= 5; i++)
    {
        printf("%c", c | mask); // Print the char-formatted output
        mask = mask << 1;
    }
    return 0;
}
You are printing one variable as a char; you will never get 16 (which is two characters) out of that.
You have 48 = 110000 in binary. When you bitwise-OR it with 1 you get 110001 = 49, which as a char is the character '1'.
The next time you get 110000 | 10 = 110010 = 50, which is '2'.
This goes on until the 5th iteration, where 110000 | 10000 = 110000 = 48, which is '0' again, because that bit is already set.
I have a bitmap stored as an unsigned char array, containing only 1s and 0s, like this:
0 0 0 0 1 1 1 1
0 0 1 1 0 0 1 1
I wish to store this in a compact way, so I wrote a function to convert it to hexadecimal. I'll store it like:
0f33
My question now: with which function can I convert these characters back into my bitmap? When I have a pointer to the character "f", how can I convert that into the integer value 15? (I know a switch statement would do the trick, but there is probably a better way.)
Try this for C++:
int number;
std::stringstream ss;
ss << std::hex << *characterPointer;
ss >> number;
For C:
char hexstr[2] = {*characterPointer, '\0'};
int number = (int)strtol(hexstr, NULL, 16);
I want to convert between char and int in C++, but I've found a problem with the char-to-int conversion.
I did a test with two numbers, 103 and 155. 103 gives me "g" as a char and 155 gives me "ø", and both are correct, but the problem is when I try to convert the chars back to int: "g" gives me 103 (correct), but I don't know why "ø" gives me -101.
I'm using a char variable to store the numbers, and printf("%d", texto[x]) to show the number again.
I have this test script:
#include <cstdlib>
#include <cstdio>

using namespace std;

int numero = 26523;

char *inttochar(int number) {
    char *salida;
    for (int j = 0; j <= 5; j++) {
        salida[5 - j] = number % 256;
        printf("%d = %d = %c = %c = %d\n", 5 - j, number % 256, number % 256, salida[5 - j], salida[5 - j]);
        number = number / 256;
    }
    return salida;
}

int main(int argc, char **argv) {
    char *texto = inttochar(numero);
    for (int j = 0; j <= 5; j++) {
        printf("%d: ", j + 1);
        printf("%c = %d\n", texto[j], texto[j]);
    }
    return 0;
}
And result is:
5 = 155 = ø = ø = -101
4 = 103 = g = g = 103
3 = 0 = = = 0
2 = 0 = = = 0
1 = 0 = = = 0
0 = 0 = = = 0
1: = 0
2: = 0
3: = 0
4: = 0
5: g = 103
6: ø = -101
With this script I want to convert a number to a base-256 char array. What am I doing wrong?
Thanks! And I'm sorry for my English.
Characters are signed values between -128 and 127 (on most systems). You're converting something from outside that range: the conversion to char truncates, but the bit pattern is OK. Converting in the other direction has to produce a value back in that range, and 155 isn't part of it. The bit pattern for 155 is 0x9b; to read it as a signed value, you invert the bits and add one, so it becomes -0x65, which is -101 in decimal.
You can fix it with an AND that strips off the sign-extension bits: salida[5-j] & 0xff.
Edit: As noted in the comments your salida variable is intended to be a string but you never allocate any storage for it.
char *salida = new char[6];
char appears to be a signed type on your platform. That means 155 won't fit in it; you're getting 155 reinterpreted as a two's-complement signed number, which equals -101.
How does this code build a number from the data in the string buffer? What is the * 10 doing? I know that by subtracting '0' you convert the ASCII digit to an integer.
char *buf; // assume that buf is assigned a value such as 01234567891234567
long key_num = 0;
someFunction(&key_num);
...

void someFunction(long *key_num) {
    for (int i = 0; i < 18; i++)
        *key_num = *key_num * 10 + (buf[i] - '0');
}
(Copied from my memory of code that I am working on recently)
It's basically an atoi-type (or atol-type) function for creating an integral value from a string. Consider the string "123".
Before starting, key_num is set to zero.
On the first iteration, that's multiplied by 10 to give you 0, then it has the character value '1' added and '0' subtracted, effectively adding 1 to give 1.
On the second iteration, that's multiplied by 10 to give you 10, then it has the character value '2' added and '0' subtracted, effectively adding 2 to give 12.
On the third iteration, that's multiplied by 10 to give you 120, then it has the character value '3' added and '0' subtracted, effectively adding 3 to give 123.
Voila! There you have it, 123.
If you change the code to look like:
#include <iostream>

char buf[] = "012345678901234567";

void someFunction(long long *key_num) {
    std::cout << *key_num << std::endl;
    for (int i = 0; i < 18; i++) {
        *key_num = *key_num * 10 + (buf[i] - '0');
        std::cout << *key_num << std::endl;
    }
}

int main(void) {
    long long x = 0;
    someFunction(&x);
    return 0;
}
then you should see it in action. (I had to change your value from the 17-character array you provided in your comment to an 18-character one; otherwise you'd read past the end of the string. I also had to change to long long because my longs weren't big enough.)
0
0
1
12
123
1234
12345
123456
1234567
12345678
123456789
1234567890
12345678901
123456789012
1234567890123
12345678901234
123456789012345
1234567890123456
12345678901234567
As a shorter example with the number 1234, that can be thought of as:
1000 * 1 + 100 * 2 + 10 * 3 + 4
Or:
10 * (10 * (10 * 1 + 2) + 3) + 4
The first time through the loop, *key_num would be 1. The second time it is multiplied by 10 and 2 added (ie 12), the third time multiplied by 10 and 3 added (ie 123), the fourth time multiplied by 10 and 4 added (ie 1234).
It just multiplies the current long value (*key_num) by 10, adds the digit value, then stores the result again.
EDIT: It's not bit-shifting anything; it's just math. You can imagine it as shifting decimal digits, but internally it's binary.
key_num = 0 (0)
key_num = key_num * 10 + ('0' - '0') (0)
key_num = key_num * 10 + ('1' - '0') (1)
key_num = key_num * 10 + ('2' - '0') (12)