converting string of ascii character decimal values to binary values - c++

I need help writing a program that converts full sentences to binary code (ASCII -> decimal -> binary), and vice versa, but I am having trouble doing it. Right now I am working on ASCII -> binary.
ASCII characters have decimal values: a = 97, b = 98, etc. I want to get the decimal value of an ASCII character and convert it to binary. For example, 10 in decimal is simply:
10 (decimal) == 1010 (binary)
So the ascii decimal value of a and b is:
97, 98
This in binary is (including the space character, which is 32):
11000011000001100010 == "a b"
11000011100010 == "ab"
I have written this:
int c_to_b(char c)
{
    return (printf("%d", (c ^= 64 ^= 32 ^= 16 ^= 8 ^= 4 ^= 2 ^= 1 ^= 0));
}
int s_to_b(char *s)
{
    long bin_buf = 0;
    for (int i = 0; s[i] != '\0'; i++)
    {
        bin_buf += s[i] ^= 64 ^= 32 ^= 16 ^= 8 ^= 4 ^= 2 ^= 1 ^= 0;
    }
    return printf("%d", bin_buf);
}
Example usage (main.c):
int main(void)
{
    // this should print out each binary value for each character in this string
    // eg: h = 104, e = 101
    // print decimal to binary 104 and 101 which would be equivalent to:
    // 11010001100101
    // s_to_b returns printf so it should print automatically
    s_to_b("hello, world!");
    return 0;
}
To elaborate, the for loop in the second snippet loops through each character in the character array until it hits the null terminator. Each time it reaches a character, it does that operation. Am I using the right operation?

Maybe you want something like
void s_to_b(const char *s)
{
    if (s != NULL) {
        while (*s) {
            int c = *s;
            printf(" %d", c);
            s++;
        }
        putchar('\n'); /* putc() needs a FILE* argument; putchar() writes to stdout */
    }
}
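That prints the decimal values. To get the binary digits the question is actually after, here is a minimal sketch along the same lines (the helper name s_to_binary is mine, and it assumes plain 8-bit characters):
#include <stdio.h>

void s_to_binary(const char *s)
{
    if (s != NULL) {
        for (; *s; s++) {
            /* print the 8 bits of each character, most significant first */
            for (int bit = 7; bit >= 0; bit--)
                putchar(((*s >> bit) & 1) ? '1' : '0');
            putchar(' ');
        }
        putchar('\n');
    }
}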

Store HEX char array in byte array with out changing to ASCII or any thing else

My char array is "00000f01" and I want to end up with byte out[4] = {0x00,0x00,0x0f,0x01};
I tried the code sent by #ChrisA, and thanks to him the Serial.println( b,HEX ); shows exactly what I need. But, first, I cannot access this output array: when I try to print the "out" array it seems empty. I also tried this code:
void setup() {
    Serial.begin(9600);
    char arr[] = "00000f01";
    byte out[4];
    byte arer[4];
    auto getNum = [](char c){ return c ; };
    byte *ptr = out;
    for(char *idx = arr ; *idx ; ++idx, ++ptr ){
        *ptr = (getNum( *idx++ ) << 4) + getNum( *idx );
    }
    int co=0;
    //Check converted byte values.
    for( byte b : out ){
        Serial.println( co );
        Serial.println( b,HEX );
        arer[co]=b;
        co++;
    }
    // Serial.print(getNum,HEX);
    // Serial.print(out[4],HEX);
    // Serial.print(arer[4],HEX);
    /*
    none of this codes commented above worked*/
}
void loop() {
}
but it is not working either. Please help me.
The title of your question leads me to believe there's something missing in either your understanding of char arrays or the way you've asked the question. Often people have difficulty understanding the difference between a hexadecimal character or digit, and the representation of a byte in memory. A quick explanation:
Internally, all memory is just binary. You can choose to represent it (i.e. display it) in bits, ASCII, decimal or hexadecimal, but that doesn't change what is stored in memory. On the other hand, since memory is just binary, characters always require a character encoding. That can be Unicode or other more exotic encodings, but typically it's just ASCII. So if you want a string of characters, whether they spell out a hexadecimal number or a sentence or random letters, they must be encoded in ASCII.
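To illustrate with a tiny standalone sketch of my own (not part of the question): the same byte in memory prints three different ways depending only on the representation you ask for:
#include <cstdio>

int main()
{
    unsigned char b = 'a';
    printf("%c %d %x\n", b, b, b); // prints: a 97 61
    return 0;
}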
Now the body of the question can easily be addressed:
AFAIK, there's no way to "capture" the output of Serial.println( b,HEX ) programmatically, so you need to find another way to do your conversion from hex characters. The getNum() lambda provides the perfect opportunity. At the moment it does nothing, but if you adjust it so the character '0' turns into the number 0, and the character 'f' turns into the number 15, and so on, you'll be well on your way.
Here's a quick and dirty way to do that:
void setup() {
    Serial.begin(9600);
    char arr[] = "00000f01";
    byte out[4];
    byte arer[4];
    auto getNum = [](char c){ return (c <= '9' ? c-'0' : c-'a'+10) ; };
    byte *ptr = out;
    for(char *idx = arr ; *idx ; ++idx, ++ptr ){
        *ptr = (getNum( *idx++ ) << 4) + getNum( *idx );
    }
    int co=0;
    //Check converted byte values.
    for( byte b : out ){
        Serial.println( co );
        if(b < 0x10)
            Serial.print('0');
        Serial.println( b,HEX );
        arer[co]=b;
        co++;
    }
}
void loop() {
}
All I've done is to modify getNum so it returns 0 for '0' and 15 for 'f', and so on in between. It does so by subtracting the value of the character '0' from the characters '0' through '9', or subtracting the value of the character 'a' from the characters 'a' through 'f'. Fortunately, the values of the characters '0' through '9' go up one at a time, as do those of the characters 'a' through 'f'. For instance, 'c' is 99 in ASCII, so getNum('c') computes 99 - 97 + 10 = 12, the value of the hex digit c. Note this will fall over if you input 'F' or something, but it'll do for the example you show.
When I run the above code on a Uno, I get this output:
0
00
1
00
2
0F
3
01
which seems to be what you want.
Epilogue
To demonstrate how print functions in C++ can lead you astray as to the actual value of the thing you're printing, consider the cout version:
If I compile and run the following code in C++14:
#include <iostream>
#include <iomanip>
#include <string>
typedef unsigned char byte;
int main()
{
    char arr[] = "00000f01";
    byte out[4];
    byte arer[4];
    auto getNum = [](char c){ return c ; };
    byte *ptr = out;
    for(char *idx = arr ; *idx ; ++idx, ++ptr ){
        *ptr = (getNum( *idx++ ) << 4) + getNum( *idx );
    }
    int co=0;
    //Check converted byte values.
    for( byte b : out ){
        std::cout << std::setfill('0') << std::setw(2) << std::hex << b;
        arer[co]=b;
        co++;
    }
}
I get this output:
00000f01
appearing to show that the conversion from hex characters has occurred. But this is only because cout ignores std::hex and treats b as a char to be printed in ASCII. Because the string "00000f01" has '0' as the first char in each pair, which happens to have a hex value (0x30) with zero lower nybble value, the (getNum( *idx++ ) << 4) happens to do nothing. So b will contain the original second char in each pair, which when printed in ASCII looks like a hex string.
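As an aside (my own sketch, not from the original answer): if you do want cout to show the numeric value of a byte, promote it to int first so the integer overload of operator<< is selected and std::hex is honoured:
#include <iostream>
#include <iomanip>

int main()
{
    unsigned char b = 0x0f;
    // the cast selects the int overload, so "0f" is printed rather than a glyph
    std::cout << std::setfill('0') << std::setw(2) << std::hex
              << static_cast<int>(b) << std::endl;
}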
I'm not sure what you mean by "... with out changing to ASCII or any thing else" so maybe I'm misunderstanding your question.
Anyway, below is some simple code to convert the hex-string to an array of unsigned.
#include <iostream>
#include <cassert>

unsigned getVal(char c)
{
    assert(
        (c >= '0' && c <= '9') ||
        (c >= 'a' && c <= 'f') ||
        (c >= 'A' && c <= 'F'));
    if (c - '0' < 10) return c - '0';
    if (c - 'A' < 6) return 10 + c - 'A';
    return 10 + c - 'a';
}
int main()
{
    char arr[] = "c02B0f01";
    unsigned out[4];
    for (auto i = 0; i < 4; ++i)
    {
        out[i] = 16*getVal(arr[2*i]) + getVal(arr[2*i+1]);
    }
    for (auto o : out)
    {
        std::cout << o << std::endl;
    }
}
Output:
192
43
15
1
If you change the printing to
for (auto o : out)
{
    std::cout << "0x" << std::hex << o << std::dec << std::endl;
}
the output will be:
0xc0
0x2b
0xf
0x1

Project Euler 8 in c++

I'm trying to solve problem 8 from Project Euler, but I'm getting much too large results and I don't know why.
The problem is "Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?"
My code:
#include <iostream>
#include <string>
#include <cstdlib> // for system()
int main()
{
std::string str = "7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450";
    long long a = 1;
    long long fin = 0;
    for (int c = 0; c < 988; c++)
    {
        for (int d = 0; d < 13; d++)
        {
            a = a * str.at(c + d);
        }
        if (a > fin)
        {
            fin = a;
            std::cout << fin << " at " << c << std::endl;
        }
        a = 1;
    }
    system("pause");
}
The output :
7948587103611909356 at 0
8818137127266647872 at 15
8977826317031653376 at 71
9191378290313403392 at 214
9205903071867879424 at 573
Press any key to continue...
The problem is that the characters '0' through '9' are not the same as the integers 0 through 9; rather, '0' has the value 48, '1' has the value 49, and so on. (These are the ASCII values of those characters.)
So to convert from a digit character to the desired number (for example, to extract 3 from '3') you need to subtract '0'. In other words, you need to change this:
a = a * str.at(c + d);
to this:
a = a * (str.at(c + d) - '0');
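To see the difference concretely, here is a tiny standalone sketch of my own:
#include <iostream>

int main()
{
    char c = '3';
    std::cout << (int)c << std::endl;    // 51, the ASCII code of '3'
    std::cout << (c - '0') << std::endl; // 3, the digit value
}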

need help for write a function that give me all the states of a number

I have a 192-bit number, and I want to write a function that gives me all of the states of this number as follows:
1) all the states with one 1-bit
2) all the states with two 1-bits
3) all the states with three 1-bits
.
.
.
and so on, until all of the bits are 1.
I also want to write each of these parts to a separate file.
So far I've only written the states where all the 1-bits are adjacent.
For example (for a 16-bit number):
0000000000000011 ----> then I shift the bits to the left. But I can't find a good way to get all of the states of two bits.
(I use the MIRACL library in C for this big number.)
Do you have any idea?
Thank you :)
You could use 6 for-loops (192/32 bits) which loop across all the values of a uint32.
Inside every for-loop you can multiply the loop counter by some value to get the right value, something like this:
for (uint64_t i = 0; i <= 0xFFFFFFFF; i++) {
    for (uint64_t j = 0; j <= 0xFFFFFFFF; j++) {
        bignumber = j + 0x100000000 * i; // i fills the next-higher 32 bits
        print(bignumber);
    }
}
Or, if you want to do it really bitwise, you could do some bitmasking inside the for-loops; one concrete option is sketched below.
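Since the question asks for all the states with exactly k 1-bits, one useful building block is "Gosper's hack", which steps from an integer to the next larger integer with the same number of set bits. Here is a minimal standalone sketch of my own for a 16-bit word with k = 2; for 192 bits the same idea would have to be lifted onto your big-number type:
#include <cstdint>
#include <cstdio>

int main()
{
    const int n = 16, k = 2;
    uint32_t v = (1u << k) - 1;       // smallest n-bit value with k bits set
    while (v < (1u << n)) {
        printf("%04x\n", v);
        uint32_t c = v & -v;          // isolate the lowest set bit
        uint32_t r = v + c;           // ripple it into the next position up
        v = r | (((v ^ r) >> 2) / c); // pack the remaining bits at the bottom
    }
    return 0;
}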
I do not know your functions, but if you have num, shiftLeft and equals functions, it can be like this:
for (int i = 0; i < 192; i += 2)
{
    num->assign(0b11);
    num->shiftLeft(i*2);
    if (num->andOperand(victim)->equals(num))
    {
        // this is a number that has the two consecutive 1-bits, and only those
    }
    if (num->andOperand(victim)->biggerAndEqual(0b11))
    {
        // this is a number that has at least one pair of consecutive 1-bits
    }
}
As the problem was stated there are (2^192 - 1) numbers to print, because all permutations are covered except 0, which contains no 1-bits. That is clearly impossible, so the question must be asking for consecutive set bits. As #n.m. wrote, get it working with 4 bits first, then extend it to 192 bits. To shift a number, you double it. This solution works without doing any bit shifting or multiplication - by addition only (apart from the bit mask in printbits()).
#include <stdio.h>

#define BITS 4

unsigned printmask;

void printbits (unsigned num) {
    int i;
    for (i=0; i<BITS; i++) {
        if (num & printmask)
            printf ("1");
        else
            printf ("0");
        num = num + num;
    }
    printf (" ");
}

int main() {
    unsigned num, bits;
    int m, n;
    printmask = 1; // prepare bit mask for printing
    for (n=1; n<BITS; n++)
        printmask = printmask + printmask;
    num = 1;
    for (n=0; n<BITS; n++) {
        bits = num;
        for (m=n; m<BITS; m++) {
            printbits (bits);
            bits = bits + bits;
        }
        printf ("\n");
        num = num + num + 1;
    }
    return 0;
}
Program output
0001 0010 0100 1000
0011 0110 1100
0111 1110
1111

Decimal to Binary, strange output

I wrote a program to convert decimal to binary for practice purposes, but I get some strange output. When doing modulo with the decimal number, I get the correct value, but what goes into the array is a forward slash? I am using a char array so I can print the output directly with cout <<.
// web binary converter: http://mistupid.com/computers/binaryconv.htm
#include <iostream>
#include <math.h>
#include <malloc.h> // _msize
#include <climits>
#define WRITEL(x) cout << x << endl;
#define WRITE(x) cout << x;
using std::cout;
using std::endl;
using std::cin;

char * decimalToBinary(int decimal);
void decimal_to_binary_char_array();
static char * array_main;

char * decimalToBinary(int decimal) // tied to array_main
{
    WRITEL("Number to convert: " << decimal << "\n");
    char * binary_array;
    int t = decimal,   // for number of digits
        digits = 0,    // number of digits
        bit_count = 0; // total digit number of binary number
    static unsigned int array_size = 0;
    if(decimal < 0) { t = decimal; t = -t; } // if number is negative, make it positive
    while(t > 0) { t /= 10; digits++; } // determine number of digits
    array_size = (digits * sizeof(int) * 3); // number of bytes to allocate to array_main
    WRITEL("array_size in bytes: " << array_size);
    array_main = new char[array_size];
    int i = 0; // counter for number of binary digits
    while(decimal > 0)
    {
        array_main[i] = (char) decimal % 2 + '0';
        WRITE("decimal % 2 = " << char (decimal % 2 + '0') << " ");
        WRITE(array_main[i] << " ");
        decimal = decimal / 2;
        WRITEL(decimal);
        i++;
    }
    bit_count = i;
    array_size = bit_count * sizeof(int) + 1;
    binary_array = new char[bit_count * sizeof(int)];
    for(int i=0; i<bit_count+1; i++)
        binary_array[i] = array_main[bit_count-1-i];
    //array_main[bit_count * sizeof(int)] = '\0';
    //WRITEL("\nwhole binary_array: "); for(int i=0; i<array_size; i++) WRITE(binary_array[i]); WRITEL("\n");
    delete [] array_main;
    return binary_array;
}

int main(void)
{
    int num1 = 3001;
    // 3001 = 101110111001
    // 300 = 100101100
    // 1000 = 1111101000
    // 1200 = 10010110000
    // 1000000 = 11110100001001000000
    // 200000 = 110000110101000000
    array_main = decimalToBinary(num1);
    WRITEL("\nMAIN: " << array_main);
    cin.get();
    delete [] array_main;
    return 0;
}
The output:
Number to convert: 3001
array_size in bytes: 48
decimal % 2 = 1 / 1500
decimal % 2 = 0 0 750
decimal % 2 = 0 0 375
decimal % 2 = 1 1 187
decimal % 2 = 1 / 93
decimal % 2 = 1 1 46
decimal % 2 = 0 0 23
decimal % 2 = 1 1 11
decimal % 2 = 1 1 5
decimal % 2 = 1 1 2
decimal % 2 = 0 1 1
decimal % 2 = 1 1 0
MAIN: 1111101/100/
What are those forward slashes in output (1111101/100/)?
Your problem is here:
array_main[i] = (char) decimal % 2 + '0';
You are casting decimal to char, and that strips off the high-order bits, so in some cases the value becomes negative. % applied to a negative number yields a negative result, hence you get the character one before '0' in the ASCII chart, which is /. For example, 3001 truncated to a signed 8-bit char is -71; -71 % 2 is -1, and '0' - 1 is 47, the code for '/'.
I would also like to say that I think your macros WRITEL and WRITE qualify as preprocessor abuse. :-)
It must be array_main[i] = (char) (decimal % 2 + '0'); (note the parentheses). But anyway, the code is horrible, please write it again from scratch.
I haven't tried to analyze all your code in detail, but just glancing at it and seeing delete [] array_main; in two places makes me suspicious. The length of the code makes me suspicious as well. Converting a number to binary should take about two or three lines of code; when I see code something like ten times that long, I tend to think that analyzing it in detail isn't worth the trouble -- if I had to do it, I'd just start over...
Edit: as to how to do the job better, my immediate reaction would be to start with something on this general order:
// warning: untested code.
#include <string>
#include <deque>

std::string dec2str(unsigned input) {
    std::deque<char> buffer;
    while (input) {
        buffer.push_front((input & 1) + '0');
        input >>= 1;
    }
    // deque storage isn't contiguous, so build the string from iterators
    return std::string(buffer.begin(), buffer.end());
}
While I haven't tested this, it's simple enough that I'd be surprised if there were any errors above the level of simple typos (and it's short enough that there doesn't seem to be room to hide more than one or two of those, at very most).
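For what it's worth, a quick usage sketch of my own (assuming the dec2str above is visible):
#include <iostream>

int main() {
    std::cout << dec2str(3001) << std::endl; // expected: 101110111001
}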

converting 128 bits from a character array to decimal without external libraries in C++ [closed]

I had to convert the 128 bits of a character array of size 16 (1 byte per character) into decimal and hexadecimal, without using any libraries other than those included. Converting it to hexadecimal was easy, as four bits were processed at a time and the result for each four bits was printed as soon as it was generated.
But when it comes to decimal, converting it the normal mathematical way was not possible, in which each bit is multiplied by 2 raised to the power of the bit's position.
So I thought to convert it like I did with hexadecimal, by printing digit by digit. But in decimal that is not possible, as the maximum digit is 9 and needs 4 bits to be represented, while 4 bits can represent decimal numbers up to 15. I tried to build some mechanism to carry the extra part, but couldn't find a way to do so, and I don't think that approach was going to work either. I have been trying aimlessly for three days, as I have no idea what to do, and couldn't find any helpful solution on the internet.
So, I want some way to get this done.
Here is my complete code:
#include <iostream>
#include <cstring>
#include <cmath>
using namespace std;

const int strng = 128;
const int byts = 16;

class BariBitKari {
    char bits_ar[byts];
public:
    BariBitKari(char inp[strng]) {
        set_bits_ar(inp);
    }
    void set_bits_ar(char in_ar[strng]) {
        char b_ar[byts];
        cout << "Binary 1: ";
        for (int i=0, j=0; i<byts; i++) {
            for (int k=7; k>=0; k--) {
                if (in_ar[j] == '1') {
                    cout << '1';
                    b_ar[i] |= 1UL << k;
                }
                else if (in_ar[j] == '0') {
                    cout << '0';
                    b_ar[i] &= ~(1UL << k);
                }
                j++;
            }
        }
        cout << endl;
        strcpy(bits_ar, b_ar);
    }
    char * get_bits_ar() {
        return bits_ar;
    }
    // Functions
    void print_deci() {
        char b_ar[byts];
        strcpy(b_ar, get_bits_ar());
        int sum = 0;
        int carry = 0;
        cout << "Decimal : ";
        for (int i=byts-1; i >= 0; i--){
            for (int j=4; j>=0; j-=4) {
                char y = (b_ar[i] << j) >> 4;
                // sum = 0;
                for (int k=0; k <= 3; k++) {
                    if ((y >> k) & 1) {
                        sum += pow(2, k);
                    }
                }
                // sum += carry;
                // if (sum > 9) {
                //     carry = 1;
                //     sum -= 10;
                // }
                // else {
                //     carry = 0;
                // }
                // cout << sum;
            }
        }
        cout << endl;
    }
    void print_hexa() {
        char b_ar[byts];
        strcpy(b_ar, get_bits_ar());
        char hexed;
        int sum;
        cout << "Hexadecimal : 0x";
        for (int i=0; i < byts; i++){
            for (int j=0; j<=4; j+=4) {
                char y = (b_ar[i] << j) >> 4;
                sum = 0;
                for (int k=3; k >= 0; k--) {
                    if ((y >> k) & 1) {
                        sum += pow(2, k);
                    }
                }
                if (sum > 9) {
                    hexed = sum + 55;
                }
                else {
                    hexed = sum + 48;
                }
                cout << hexed;
            }
        }
        cout << endl;
    }
};

int main() {
    char ar[strng];
    for (int i=0; i<strng; i++) {
        if ((i+1) % 8 == 0) {
            ar[i] = '0';
        }
        else {
            ar[i] = '1';
        }
    }
    BariBitKari arr(ar);
    arr.print_hexa();
    arr.print_deci();
    return 0;
}
To convert a 128-bit number into a "decimal" string, I'm going to make the assumption that the large decimal value just needs to be contained in a string and that we're only in the "positive" space. Without using a proper big number library, I'll demonstrate a way to convert any array of bytes into a decimal string. It's not the most efficient way because it continually parses, copies, and scans strings of digit characters.
We'll take advantage of the fact that any large number such as the following:
0x87654321 == 2,271,560,481
Can be converted into a series of bytes shifted in 8-bit chunks. Adding back these shifted chunks results in the original value
0x87 << 24 == 0x87000000 == 2,264,924,160
0x65 << 16 == 0x00650000 == 6,619,136
0x43 << 8 == 0x00004300 == 17,152
0x21 << 0 == 0x00000021 == 33
Sum == 0x87654321 == 2,271,560,481
So our strategy for converting a 128-bit number into a string will be to:
Convert the original 16 byte array into 16 strings - each string representing the decimal equivalent for each byte of the array
"Shift left" each string by the appropriate number of bits based on the index of the original byte in the array. Taking advantage of the fact that a left shift is equivalent of multiplying by 2.
Add all these shifted strings together
So to make this work, we introduce a function that can "Add" two strings (consisting only of digits) together:
#include <string>
#include <algorithm> // std::reverse
#include <utility>   // std::swap
using namespace std;

// s1 and s2 are strings consisting of digit chars only ('0'..'9')
// This function will compute the "sum" of s1 and s2 as a string
string SumStringValues(const string& s1, const string& s2)
{
    string result;
    string str1 = s1, str2 = s2;
    // make str2 the bigger string
    if (str1.size() > str2.size())
    {
        swap(str1, str2);
    }
    // pad zeros onto the front of str1 so it's the same size as str2
    while (str1.size() < str2.size())
    {
        str1 = string("0") + str1;
    }
    // now do the addition operation as a loop on these strings
    size_t len = str1.size();
    bool carry = false;
    while (len)
    {
        len--;
        int d1 = str1[len] - '0';
        int d2 = str2[len] - '0';
        int sum = d1 + d2 + (carry ? 1 : 0);
        carry = (sum > 9);
        if (carry)
        {
            sum -= 10;
        }
        result.push_back('0' + sum);
    }
    if (carry)
    {
        result.push_back('1');
    }
    std::reverse(result.begin(), result.end());
    return result;
}
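For example, SumStringValues("999", "1") pads "1" out to "001", carries through every column, and returns "1000".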
Next, we need a function to do a "shift left" on a decimal string:
// s is a string of digits only (interpreted as a decimal number)
// This function will "shift left" the string by N bits,
// basically "multiplying by 2" N times
string ShiftLeftString(const string& s, size_t N)
{
    string result = s;
    while (N > 0)
    {
        result = SumStringValues(result, result); // multiply by 2
        N--;
    }
    return result;
}
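For example, ShiftLeftString("1", 8) doubles the string eight times and returns "256", matching 1 << 8 for native integers.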
Then, to put it all together, here is a function to convert a byte array to a decimal string:
string MakeStringFromByteArray(unsigned char* data, size_t len)
{
    string result = "0";
    for (size_t i = 0; i < len; i++)
    {
        auto tmp = to_string((unsigned int)data[i]);   // byte to decimal string
        tmp = ShiftLeftString(tmp, (len - i - 1) * 8); // shift left into position
        result = SumStringValues(result, tmp);         // accumulate the sum
    }
    return result;
}
Now let's test it out on the original 32-bit value we used above:
#include <iostream>

int main()
{
    // 0x87654321
    unsigned char data[4] = { 0x87, 0x65, 0x43, 0x21 };
    cout << MakeStringFromByteArray(data, 4) << endl;
    return 0;
}
The resulting program will print out: 2271560481 - same as above.
Now let's try it out on a 16 byte value:
int main()
{
    // 0x87654321aabbccddeeff432124681111
    unsigned char data[16] = { 0x87,0x65,0x43,0x21,0xaa,0xbb,0xcc,0xdd,
                               0xee,0xff,0x43,0x21,0x24,0x68,0x11,0x11 };
    std::cout << MakeStringFromByteArray(data, sizeof(data)) << endl;
    return 0;
}
The above prints: 179971563002487956319748178665913454865
And we'll use python to double-check our results:
Python 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> int("0x87654321aabbccddeeff432124681111", 16)
179971563002487956319748178665913454865
>>>
Looks good to me.
I originally had an implementation that would do the chunking and summation in 32-bit chunks instead of 8-bit chunks. However, little-endian vs. big-endian byte-order issues get involved. I'll leave that potential optimization as an exercise for another day.