Why does this program not give out binary output [duplicate] - c++

This question already has answers here:
Why isn't `int pow(int base, int exponent)` in the standard C++ libraries?
(11 answers)
The most efficient way to implement an integer based power function pow(int, int)
(21 answers)
Closed 2 days ago.
I'm new to C++, and this program I wrote does not give the expected output, which is the binary form of an integer input.
It gives the correct result in Python Tutor, but in VSCode the result is always one less than the actual binary output.
Example:
5 = 100
6 = 109
17 = 10000
#include <iostream>
#include <cmath>

int main() {
    int n;
    std::cout << "Enter n:- ";
    std::cin >> n;
    int ans = 0;
    int i = 0;
    while (n != 0) {
        int bit = n & 1;
        ans = (bit * pow(10, i)) + ans;
        n = n >> 1;
        i++;
    }
    std::cout << ans;
    return 0;
}
What did I do wrong?
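The likely culprit is std::pow: it operates on floating-point values, and on some implementations pow(10, i) comes out a hair below the exact power, so the conversion back to int truncates and the printed number is one less than expected; the duplicate links above discuss exactly this. A minimal sketch that stays in integer arithmetic (the place variable is my own name for the current decimal place value):

#include <iostream>

int main() {
    int n;
    std::cout << "Enter n:- ";
    std::cin >> n;
    long long ans = 0;
    long long place = 1;        // 10^i kept as an integer, no pow needed
    while (n != 0) {
        int bit = n & 1;
        ans += bit * place;     // put the bit at the current decimal place
        place *= 10;
        n >>= 1;
    }
    std::cout << ans;
    return 0;
}

This works as long as the binary form fits in a long long (inputs of roughly 18 bits); beyond that you would build the output digit by digit in a string instead.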

Related

Write integers to binary file in Big Endian or Little endian form C++ [duplicate]

This question already has answers here:
How to check whether a system is big endian or little endian?
(18 answers)
How do I convert between big-endian and little-endian values in C++?
(35 answers)
Closed 5 months ago.
My teacher told me to:
Write the first M negative odd numbers in 16-bit little-endian form
Write the first N positive even numbers in 32-bit big-endian form
This is my code so far. I was able to do everything except the endianness part.
#include <iostream>
#include <fstream>
#include <cmath>   // pow
using namespace std;

int main() {
    // writing to file section
    int m, n;
    ofstream bin("D://Ex02.bin", ios::binary | ios::out);
    cin >> m >> n;
    // Write the first M negative odd numbers in 16 bit (2 bytes)
    for (int i = 0; i <= m; i++) {
        int I = -2 * i - 1;
        bin.write((char*)&I, 2);
    }
    // Write the first N positive even numbers in 32 bit (4 bytes)
    for (int i = 0; i <= n; i++) {
        int I = 2 * i;
        bin.write((char*)&I, 4);
    }
    bin.close();

    // test section
    ifstream test("D://Ex02.bin", ios::binary | ios::in);
    // Read 16 bit numbers
    int x = -pow(2, 16) + 1;
    for (int i = 0; i <= m; i++) {
        test.read((char*)&x, 2);
        cout << x << endl;
    }
    // Read 32 bit numbers
    x = 0;
    for (int i = 0; i <= n; i++) {
        test.read((char*)&x, 4);
        cout << x << endl;
    }
    test.close();
    return 0;
}
The problem is that I have no idea how to write an integer in a specific endian form.
Can someone help me, please?
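One way to do it is to avoid relying on the machine's byte order at all: emit one byte at a time with shifts and masks, so you control the order yourself. A minimal sketch, with write_le and write_be as hypothetical helper names:

#include <cstdint>
#include <fstream>

// Write the low 'nbytes' bytes of 'value', least significant byte first (little endian).
void write_le(std::ofstream& out, std::uint64_t value, int nbytes) {
    for (int i = 0; i < nbytes; ++i) {
        char byte = static_cast<char>((value >> (8 * i)) & 0xFF);
        out.write(&byte, 1);
    }
}

// Write the low 'nbytes' bytes of 'value', most significant byte first (big endian).
void write_be(std::ofstream& out, std::uint64_t value, int nbytes) {
    for (int i = nbytes - 1; i >= 0; --i) {
        char byte = static_cast<char>((value >> (8 * i)) & 0xFF);
        out.write(&byte, 1);
    }
}

In the loops above you would call, for example, write_le(bin, static_cast<std::uint16_t>(I), 2) for the 16-bit numbers and write_be(bin, static_cast<std::uint32_t>(I), 4) for the 32-bit ones; casting to the unsigned fixed-width type first gives the two's-complement bit pattern of the negative values.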

bitwise conversion of decimal numbers to binary

The code works fine for some values, e.g. for 10 the output is 1010, which is correct, but for 20, 50, or 51 the output is wrong, or at least it seems so to me.
Please help!
#include <iostream>
#include <math.h>
using namespace std;

int main()
{
    int n;
    cin >> n;
    int ans = 0;
    int i = 0;
    while (n != 0)
    {
        int bit = n & 1;
        ans = (bit * pow(10, i)) + ans;
        n = n >> 1;
        i++;
    }
    cout << " Answer is " << ans << endl;
}
Change the datatype of ans:
    float ans = 0;
After trying to run your code, it works for me: 51 correctly comes out as 110011, 50 as 110010, and 20 as 10100. Those are the correct bit values; you can check them by counting in binary or by adding 10 (i.e. 1010, the representation of two) the appropriate number of times.
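If the wrong results only show up on some machines, the usual suspect is pow again: it works in floating point, and on some standard libraries pow(10, i) can land slightly below the exact power, so the implicit conversion to int truncates. Rounding the result instead of truncating sidesteps that; a minimal sketch of that change (ans also widened to long long so larger results fit), using lround from the math header:

#include <iostream>
#include <math.h>
using namespace std;

int main()
{
    int n;
    cin >> n;
    long long ans = 0;
    int i = 0;
    while (n != 0)
    {
        int bit = n & 1;
        ans += bit * lround(pow(10, i));   // round to the nearest integer instead of truncating
        n = n >> 1;
        i++;
    }
    cout << " Answer is " << ans << endl;
}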

Fraction pattern in c++ [duplicate]

This question already has answers here:
C++. Dividing 1 by any number gives 0
(3 answers)
Closed 1 year ago.
I need to write a program to compute this pattern in C++:
S = 1/2 + 2/3 + 3/4 + 4/5 + ... + (N-1)/N
I have tried, but my code keeps printing 0.
This is the code that I have written:
#include <iostream>
using namespace std;

int main()
{
    unsigned int N;
    float S = 0;
    cout << "Enter N:";
    cin >> N;
    for (int I = 2; I <= N; I++)
    {
        S = S + (I - 1) / I;
    }
    cout << S;
    return 0;
}
I have to write it with a for loop, a while loop, and a do-while loop.
(I - 1) / I involves only integers, so the division is integer division and the remainder is discarded; since I - 1 < I, every term evaluates to 0, which is why S stays 0.
You can avoid this by subtracting 1.f from I instead, so the division is done in floating point.
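A minimal sketch of the question's program with just that change applied:

#include <iostream>
using namespace std;

int main()
{
    unsigned int N;
    float S = 0;
    cout << "Enter N:";
    cin >> N;
    for (int I = 2; I <= N; I++)
    {
        S = S + (I - 1.f) / I;   // 1.f promotes the division to floating point
    }
    cout << S;
    return 0;
}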

error: static_assert failed due to requirement '!is_signed<int>::value' "" static_assert((!is_signed<_Tp>::value), ""); [duplicate]

This question already has answers here:
Why __gcd() is throwing error in macOS mojave?
(3 answers)
Closed 1 year ago.
note: in instantiation of function template specialization 'std::__1::__gcd' requested here
while (__gcd(n, k) <= 1) n++;
The above line was displayed along with the error shown in the title. I know there are many other methods to calculate the GCD, but I am confused about why it's not working with __gcd().
I am using a MacBook, OS: Big Sur
Apple clang version 12.0.0 (clang-1200.0.32.29)
#include <bits/stdc++.h>
#include <cmath>
using namespace std;

int getSum(int n)
{
    int sum;
    for (sum = 0; n > 0; sum += n % 10, n /= 10)
        ;
    return sum;
}

int main() {
    int t, n;
    cin >> t;
    while (t--) {
        cin >> n;
        int k = getSum(n);
        while (__gcd(n, k) <= 1) n++;
        cout << n << endl;
    }
}
here, getSum(125) = 1+2+5 = 8
INPUT :
3
11
31
75
OUTPUT:
12
32
75
EXPECTED OUTPUT:
12
33
75
As the static_assert suggests, your __gcd() implementation requires unsigned argument types (i.e. the algorithm operates on non-negative numbers only), so replacing int with unsigned int should help. Alternatively, you could replace
    #include <bits/stdc++.h> // never include this
    #include <cmath>
with
    #include <iostream>
    #include <numeric> // std::gcd (C++17)
and use the GCD function from the standard library instead:
    while (std::gcd(n, k) <= 1) n++;
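Putting those two edits into the question's program, a minimal sketch (assuming you compile with -std=c++17 or newer, since std::gcd is a C++17 addition):

#include <iostream>
#include <numeric>   // std::gcd
using namespace std;

int getSum(int n)
{
    int sum;
    for (sum = 0; n > 0; sum += n % 10, n /= 10)
        ;
    return sum;
}

int main() {
    int t, n;
    cin >> t;
    while (t--) {
        cin >> n;
        int k = getSum(n);
        while (std::gcd(n, k) <= 1) n++;   // std::gcd accepts signed types
        cout << n << endl;
    }
}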

C++ [Recursive] Write a number as sum of ascending powers of 2 [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 5 years ago.
Improve this question
So, as the title says, I have to write a number as a sum of ascending powers of 2.
For instance, if I input 10, 25, 173:
10 = 2 + 8
25 = 1 + 8 + 16
173 = 1 + 4 + 8 + 32 + 128
So this is what I have done:
#include <iostream>
using namespace std;

int x, c;
int v[500];

void Rezolva(int putere)
{
    if (putere * 2 <= x)
        Rezolva(putere * 2);
    if (x - putere >= 0)
    {
        c++;
        v[c] = putere;
        x -= putere;
    }
}

int main()
{
    cin >> x;
    c = 0;
    Rezolva(1);
    for (int i = c; i >= 1; i--)
        cout << v[i] << " ";
    return 0;
}
I have a program which runs my code against some tests and verifies whether it's correct. On one test, it says that I go outside the array. Is there any way to get rid of the array or to fix this problem? If I didn't use the array, the powers would have come out in descending order.
The error isn't a compiler error.
"Caught fatal signal 11" is what I receive when the program runs its tests on my code.
For values higher than 10^9 the program crashes, so you need to change from int to long long.
#include <iostream>
using namespace std;

long long x, c;
long long v[500];

void Rezolva(long long putere)
{
    if (putere * 2 <= x)
        Rezolva(putere * 2);
    if (x - putere >= 0)
    {
        v[c++] = putere;
        x -= putere;
    }
}

int main()
{
    cin >> x;
    c = 0;
    Rezolva(1);
    for (int i = c - 1; i >= 0; i--)
        cout << v[i] << " ";
    return 0;
}
All in all, a simple overflow was the cause.
It was a simple overflow. By the way, a much easier way to do it is to use an unsigned long long together with std::bitset:
#include <bitset>
#include <iostream>
#include <string>

int main() {
    unsigned long long x;
    std::cin >> x;
    std::cout << x << " = ";
    std::string str = std::bitset<64>(x).to_string();    // bit 63 first, bit 0 last
    bool first = true;
    for (int i = str.size() - 1; i >= 0; --i)             // walk from bit 0 up to bit 63
        if (str[i] != '0') {
            if (!first) std::cout << " + ";
            std::cout << (1ull << (str.size() - 1 - i));  // character i holds bit 63 - i
            first = false;
        }
    if (first) std::cout << "0";                          // no set bits: x == 0
    std::cout << std::endl;
}
If you have a fixed-size integer (e.g. int), then you can just start at the greatest possible power of two, and if your number is at least that power, print it and subtract it. Then go to the next smaller power of two.
This is similar to how you would normally write out numbers yourself, starting from the most significant digit. The same idea works for how numbers are printed in base 16 (hex), base 10, binary literals, etc.
#include <cassert>
#include <iostream>

int main() {
    unsigned x = 173;
    std::cout << x << " = ";
    bool first = true;
    // get the max power from a proper constant
    for (unsigned power = 0x80000000; power > 0; power >>= 1)
    {
        if (power <= x)
        {
            if (!first) std::cout << " + ";
            std::cout << power;
            x -= power;
            first = false;
        }
    }
    assert(x == 0);
    std::cout << std::endl;
}
Outputs:
173 = 128 + 32 + 8 + 4 + 1