Storing output from setfill and setw to a string - c++

I am trying to produce binary numbers using C's non-standard itoa function and the C++ setfill and setw manipulators. If I use only itoa, the output does not have proper zero padding.
This is a small code snippet.
int s = 8;
for (int i = 1; i<s;i++)
{
itoa(i,buffer,2);
cout<<setfill('0')<<setw(3)<<endl;
cout<<buffer<<endl;
}
Now it does a great job of printing the output.
If I hadn't used setfill and setw, the formatting would have been something like
1
10
11
100
101
110
111
instead of
001
010
011
100
101
110
111
Now I want to store the padded binary numbers produced into a vector. Is that possible?
I think I have got a solution using an ostringstream, and it mostly works.
std::ostringstream oss;
int s = 3;
for (int i = 1; i<s;i++)
{
itoa(i,buffer,2);
oss<<setfill('0')<<setw(3);
oss<<buffer;
string s = oss.str();
cout<<s<<'\n'<<endl;
};
However, I just want to point out that the output of this solution looks somewhat off: since the stream is never cleared, each iteration's string also contains all the previously inserted numbers.
Could it be fixed by flushing out the stream between consecutive iterations? It's just an afterthought.

Consider using a bitset instead of itoa:
#include <bitset>
#include <iostream>
#include <string>
#include <vector>
int main() {
std::vector<std::string> binary_representations;
int s = 8;
for (int i = 1; i < s; i++)
{
binary_representations.push_back(std::bitset<3>(i).to_string());
}
}
EDIT: If you need a variable length, one possibility is
// Note: it might be better to make x unsigned here.
// What do you expect to happen if x < 0?
std::string binary_string(int x, std::size_t len) {
std::string result(len, '0');
for(std::string::reverse_iterator i = result.rbegin(); i != result.rend(); ++i) {
*i = x % 2 + '0';
x /= 2;
}
return result;
}
and then later
binary_representations.push_back(binary_string(i, 3));

Related

Combining elements of an integer array into a single integer variable

I am writing a simple C++ program that should combine all elements of an integer array to form one number, e.g. {4,5,6} should become 456. But my output is one less than the expected number, i.e. instead of 456 I get 455. Sometimes my program works fine and sometimes not. Can someone please explain what is causing this unpredictable behaviour? Thank you!
Please take a look at my code:
#include <bits/stdc++.h>
#include <cmath>
using namespace std;
int main()
{
int A[5] = {4,5,6,7,8};
int lengthA = 5;
int num = 0;
for(int x = 0; x < lengthA; x++)
{
num = A[x]*pow(10,lengthA-1-x) + num;
}
printf("%d\n", num ); // My O/P is 45677
}
As mentioned by Bob__, pow is a function for doubles and other floating-point types. For this specific algorithm, instead, we can do this:
int A[5] = {4,5,6,7,8};
int lengthA = 5;
int num = 0;
for(int x = 0; x < lengthA; x++)
{
num = num*10 + A[x];
}
At each step, this multiplies the previous number by 10, and makes the digit correct at that place.
E.g.
Step 1: num = 0*10 + 4 == 4
Step 2: num = 4 * 10 + 5 == 40 + 5 == 45
Step 3: num = 45 * 10 + 6 == 450 + 6 == 456
Step 4: num = 456 * 10 + 7 == 4560 + 7 == 4567
Step 5: num == 4567 * 10 + 8 == 45670 + 8 == 45678
From this simple problem you can already learn quite a bit to improve your C++ code.
Example :
// #include <bits/stdc++.h> // NO : https://stackoverflow.com/questions/31816095/why-should-i-not-include-bits-stdc-h
// using namespace std // NO : https://stackoverflow.com/questions/1452721/why-is-using-namespace-std-considered-bad-practice
#include <iostream> // include only what you need for std::cout
int main()
{
int values[]{ 4,5,6,7,8 }; // no need for an =
int num{ 0 };
// prefer range based for loops
// they will not run out of bounds
// https://en.cppreference.com/w/cpp/language/range-for
for (const int value : values)
{
num *= 10;
num += value;
}
// avoid printf, use std::cout with C++20 std::format for formatting
// https://stackoverflow.com/questions/64042652/is-printf-unsafe-to-use-in-c
// https://en.cppreference.com/w/cpp/utility/format/format
std::cout << "num = " << num << "\n";
return 0;
}
Here is another way to approach the problem: you can use a string to build the number.
With this loop, we convert each digit to a string and append it to the end of the num string. At the end, you have the combined number as a string. If you need it as an integer, you can convert it back after the loop. To convert a string to an int, see: Converting String to Numbers
#include <iostream> //include to use cout
#include <string> // include to use string
using namespace std;
int main() {
int A[5] = {4,5,6,7,8}; // input array
int lengthA = sizeof(A) / sizeof(A[0]); // size of array
std::string num = "";
for(int i=0; i<lengthA; i++){
num += std::to_string(A[i]);
}
std::cout << "Number : " << num;
}
In addition to jh316's solution:
#include <iostream>
using namespace std;
int A[] = {4,5,6,7,8};
int num = 0;
int main()
{
for(int i: A){
num = num * 10 + i;
}
cout << num;
}
Description of the code:
Initial state of the variable: num = 0
For each iteration the num variable is:
1. num = 0 * 10 + 4 = 4
2. num = 4 * 10 + 5 = 45
3. num = 45 * 10 + 6 = 456
4. num = 456 * 10 + 7 = 4567
5. num = 4567 * 10 + 8 = 45678
Here when you call pow;
pow(10,lengthA-1-x)
your code is probably calling the following overload of std::pow:
double pow ( double base, int iexp );
And as can be seen, it returns a floating-point value, which may carry a rounding error. I ran your code on my system and the results were correct; however, the same code can produce different results on different platforms, and that appears to be what is happening on yours.
Instead, you can do this:
#include <cstdio>
#include <array>
#include <span>
constexpr int convertDigitsToNumber( const std::span<const int> digits )
{
int resultNum { };
for ( const auto digit : digits )
{
resultNum = resultNum * 10 + digit;
}
return resultNum;
}
int main( )
{
constexpr std::size_t arraySize { 5 };
// use std::array instead of raw arrays
constexpr std::array<int, arraySize> arrayOfDigits { 4, 5, 6, 7, 8 };
constexpr int num { convertDigitsToNumber( arrayOfDigits ) };
std::printf( "%d\n", num );
return 0;
}
As a result of using the constexpr keyword, the above function is evaluated at compile time whenever possible, which is the case in the above code.
Note regarding constexpr: Use const and constexpr keywords wherever possible. It's a very good practice. Read about it here constexpr (C++).
Note: If you are not familiar with std::span then check it out here.

Translation from binary into decimal numbers in C++

I tried to build a function that converts a binary number stored in a string into a decimal number stored in a long long. I think my code should work, but it doesn't.
In this example, for the binary number 101110111 the decimal number is 375, but my output is completely confusing.
Here is my code:
#include <string>
#include <stdio.h>
#include <math.h>
#include <iostream>
#include <string.h>
int main() {
std::string stringNumber = "101110111";
const char *array = stringNumber.c_str();
int subtrahend = 1;
int potency = 0;
long long result = 0;
for(int i = 0; i < strlen(array); i++) {
result += pow(array[strlen(array) - subtrahend] * 2, potency);
subtrahend++;
potency++;
std::cout << result << std::endl;
}
}
Here is the output:
1
99
9703
894439
93131255
9132339223
894974720087
76039722530902
8583669948348758
What am I doing wrong here?
'1' != 1 as mentioned in the comments by #churill. '1' == 49. If you are on linux type man ascii in terminal to get the ascii table.
Try this; it is nearly the same code. I used the stringNumber directly instead of going through a const char*, subtracted '0' from each character ('0' == 48, so the subtraction yields the actual 0 or 1 integer value), and moved the digit outside of pow so each digit contributes digit * 2^potency:
auto sz = stringNumber.size();
for(int i = 0; i < sz; i++) {
result += (stringNumber[sz - subtrahend] - '0') * pow(2, potency);
subtrahend++;
potency++;
std::cout << result << std::endl;
}
Moreover, use the methods provided by std::string like .size() instead of doing strlen() on every iteration. Much faster.
In a production environment, I would highly recommend using std::bitset instead of rolling your own solution:
std::string stringNumber = "1111";
std::bitset<64> bits(stringNumber);
bits.to_ulong();
You're forgetting to convert your digits into integers. Plus you really don't need to use C strings.
Here's a better version of the code
int main() {
std::string stringNumber = "101110111";
int subtrahend = 1;
int potency = 0;
long long result = 0;
for(int i = 0; i < stringNumber.size(); i++) {
result += (stringNumber[stringNumber.size() - subtrahend] - '0') * pow(2, potency);
subtrahend++;
potency++;
std::cout << result << std::endl;
}
}
Subtracting '0' from a digit character converts it to the corresponding integer.
Now for extra credit, write a version that doesn't use pow (hint: keep a running place value and double it with potency *= 2; instead of potency++;)
c++ way
#include <string>
#include <math.h>
#include <iostream>
using namespace std;
int main() {
std::string stringNumber = "101110111";
long long result = 0;
unsigned int string_length = stringNumber.length();
for(unsigned int i = 0; i < string_length; i++) {
if(stringNumber[i]=='1')
{
long pose_value = pow(2, string_length-1-i);
result += pose_value;
}
}
std::cout << result << std::endl;
}

dynamic size of std::bitset initialization [duplicate]

I want to make a simple program that will take number of bits from the input and as an output show binary numbers, written on given bits (example: I type 3: it shows 000, 001, 010, 011, 100, 101, 110, 111).
The only problem I get is in the second for-loop, when I try to assign variable in bitset<bits>, but it wants constant number.
If you could help me find the solution I would be really grateful.
Here's the code:
#include <iostream>
#include <bitset>
#include <cmath>
using namespace std;
int main() {
int maximum_value = 0,x_temp=10;
//cin >> x_temp;
int const bits = x_temp;
for (int i = 1; i <= bits; i++) {
maximum_value += pow(2, bits - i);
}
for (int i = maximum_value; i >= 0; i--)
cout << bitset<bits>(maximum_value - i) << endl;
return 0;
}
A numeric ("non-type", as C++ calls it) template parameter must be a compile-time constant, so you cannot use a user-supplied number. Use a large constant number (e.g. 64) instead. You need another integer that will limit your output:
int x_temp = 10;
cin >> x_temp;
int const bits = 64;
...
Here 64 is some sort of a maximal value you can use, because bitset has a constructor with an unsigned long long argument, which has 64 bits (at least; may be more).
However, if you use int for your intermediate calculations, your code supports a maximum of 14 bits reliably (without overflow). If you want to support more than 14 bits (e.g. 64), use a larger type, like uint32_t or uint64_t.
A problem with holding more bits than needed is that the additional bits will be displayed. To cut them out, use substr:
cout << bitset<64>(...).to_string().substr(64 - x_temp);
Here to_string converts it to a string of 64 characters, and substr keeps only the last x_temp characters, cutting off the leading 64 - x_temp ones.
Alternatively, you can define bits as a global compile-time constant:
#include <iostream>
#include <math.h>
#include <bitset>
using namespace std;
const unsigned bits=10;
int main() {
int maximum_value = 0,x_temp=10;
for (int i = 1; i <= bits; i++) {
maximum_value += pow(2, bits - i);
}
for (int i = maximum_value; i >= 0; i--)
cout << bitset<bits>(maximum_value - i) << endl;
return 0;
}

How do I count how often a string of letters occurs in a .txt file? (in C++)

I searched for ways to count how often a string of letters appears in a .txt file and found (among others) this thread: Count the number of times each word occurs in a file
which deals with the problem by counting words (which are separated by spaces).
However, I need to do something slightly different:
I have a .txt file containing billions of letters without any formatting (no spaces, no punctuation, no line breaks, no hard returns, etc.), just a loooooong line of the letters a, g, t and c (i.e. a DNA sequence ;)).
Now I want to write a program that goes through the entire sequence and counts how often each possible contiguous sequence of 9 letters appears in that file.
Yes, there are 4^9 possible combinations of 9-letter 'words' made up of the characters A, G, T and C, but I only want to output the top 1000.
Since there are no spaces or anything, I would have to go through the file letter-by-letter and examine all the 9-letter 'words' that appear, i.e.:
ATAGAGCTAGATCCCTAGCTAGTGACTA
contains the sequences:
ATAGAGCTA, TAGAGCTAG, AGAGCTAGA, etc.
I hope you know what I mean, it's hard for me to describe the same in English, since it is not my native language.
Best regards and thank you all in advance!
Compared to billions, 2^18, or 256k seems suddenly small. The good news is that it means your histogram can be stored in about 1 MB of data. A simple approach would be to convert each letter to a 2-bit representation, assuming your file only contains AGCT, and none of the RYMK... shorthands and wildcards.
This is what this 'esquisse' does. It packs 9 bytes of text into an 18-bit value and increments the corresponding histogram bin. To speed up conversion a bit, it reads 4 bytes at a time and uses a lookup table to convert 4 glyphs at once.
I don't know how fast this will run, but it should be reasonable. I haven't tested it, but I know it compiles, at least under gcc. There is no printout, but there is a helper function to unpack sequence packed binary format back to text.
It should give you at least a good starting point
#include <vector>
#include <array>
#include <algorithm>
#include <iostream>
#include <fstream>
#include <exception>
namespace dna {
// helpers to convert nucleotides to packed binary form
enum Nucleotide : uint8_t { A, G, C, T };
uint8_t simple_lut[4][256] = {};
void init_simple_lut()
{
for (size_t i = 0 ; i < 4; ++i)
{
simple_lut[i]['A'] = A << (i * 2);
simple_lut[i]['C'] = C << (i * 2);
simple_lut[i]['G'] = G << (i * 2);
simple_lut[i]['T'] = T << (i * 2);
}
}
uint32_t pack4(const char(&seq)[4])
{
return simple_lut[0][seq[0]]
+ simple_lut[1][seq[1]]
+ simple_lut[2][seq[2]]
+ simple_lut[3][seq[3]];
}
// you can use this to convert the histogram
// index back to text.
std::string hist_index2string(uint32_t n)
{
std::string result;
result.reserve(9);
for (size_t i = 0; i < 9; ++i, n >>= 2)
{
switch (n & 0x03)
{
case A: result.insert(result.begin(), 'A'); break;
case C: result.insert(result.begin(), 'C'); break;
case G: result.insert(result.begin(), 'G'); break;
case T: result.insert(result.begin(), 'T'); break;
default:
throw std::runtime_error{ "totally unexpected error while unpacking index !!" };
}
}
return result;
}
}
int main(int argc, const char** argv)
{
if (argc < 2)
{
std::cerr << "Usage: prog_name <input_file> <output_file>\n";
return 3;
}
using dna::pack4;
dna::init_simple_lut();
std::vector<uint32_t> result;
try
{
result.resize(1 << 18);
std::ifstream ifs(argv[1]);
// read 4 bytes at a time, convert to packed bits representation
// then rotate in bits 2 by 2 in our 18 bits buffer.
// increment coresponding bin by 1
const uint32_t MASK{ (1 << 18) - 1 }; // keep 18 bits: 9 symbols x 2 bits
const std::streamsize RB{ 4 };
uint32_t accu{};
uint32_t packed{};
// we need to load at least 9 bytes to 'prime' the engine
char buffer[4];
ifs.read(buffer, RB);
accu = pack4(buffer) << 8;
ifs.read(buffer, RB);
accu |= pack4(buffer);
if (ifs.gcount() != RB)
{
throw std::runtime_error{ " input file is too short "};
}
ifs.read(buffer, RB);
while (ifs.gcount() != 0)
{
packed = pack4(buffer);
for (size_t i = 0; i < (size_t)ifs.gcount(); ++i)
{
accu <<= 2;
accu |= packed & 0x03;
packed >>= 2;
accu &= MASK;
++result[accu];
}
ifs.read(buffer, RB);
}
ifs.close();
// histogram is compiled. store data to another file...
// you can create a secondary table of { index, count };
// it's only 5MB long and use partial_sort to extract the first
// 1000.
}
catch(std::exception& e)
{
std::cerr << "Error \'" << e.what() << "\' while reading file.\n";
return 3;
}
return 0;
}
This algorithm can be adapted to run on multiple threads by opening the file in multiple streams with the proper share configuration and running the loop on chunks of the file. Care must be taken with the 16-byte seams between chunks.
If running in parallel, the inner loop is so short that it may be a good idea to provide each thread with its own histogram and merge the results at the end, otherwise, the locking overhead would slow things quite a bit.
[EDIT] Silly me I had the packed binary lookup wrong.
[EDIT2] replaced the packed lut with a faster version.
This counts the total number of non-space characters in the file:
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
int main ()
{
string line;
int sum=0;
ifstream inData ;
inData.open("countletters.txt");
while(getline(inData,line))
{
int numofChars= line.length();
for (unsigned int n = 0; n<line.length();n++)
{
if (line.at(n) == ' ')
{
numofChars--;
}
}
sum=numofChars+sum;
}
cout << "Number of characters: "<< sum << endl;
return 0 ;
}

putting an integer into a string

I'm trying to put an integer into a string by separating its digits and putting them in order into a string of size 3.
this is my code:
char pont[4];
void convertInteger(int number){
int temp100=0;
int temp10=0;
int ascii100=0;
int ascii10=0;
if (number>=100) {
temp100=number%100;
ascii100=temp100;
pont[0]=ascii100+48;
number-=temp100*100;
temp10=number%10;
ascii10=temp10;
pont[1]=ascii10+48;
number-=temp10*10;
pont[2]=number+48;
}
if (number>=10) {
pont[0]=48;
temp10=number%10;
ascii10=temp10;
pont[1]=ascii10+48;
number-=temp10*10;
pont[2]=number+48;
}
else{
pont[0]=48;
pont[1]=48;
pont[2]=number+48;
}
}
here's an example of what's supposed to happen:
number = 356
temp100 = 356%100 = 3
ascii100 = 3
pont[0]= ascii100 = 3
temp100 = 3*100 = 300
number = 356 - 300 = 56
temp10 = 56%10 = 5
ascii10 = 5
pont[1]= ascii10 = 5
temp10 = 5*10 = 50
number = 56 - 50 = 6
pont[2]=6
I might have an error somewhere and I'm just not seeing it (I don't know why)...
This is supposed to be C++, by the way; I might be mixing it up with C.
Thanks in advance
Probably the mistake that you're overlooking right now:
pont[2]=number+48;
}
if (number>=10) { /* should be else if */
pont[0]=48;
However, I'd like to suggest a different approach; you don't care that the value is above 100, 10, etc., as 0 is still a useful value -- if you don't mind zero-padding your answer.
Consider the following numbers:
int hundreds = (number % 1000) / 100;
int tens = (number % 100) / 10;
int units = (number % 10);
All built-in types know how to represent themselves to std::ostream. They can be formatted for precision, converted to different representations, etc.
This uniform handling allows us to write built-ins to the standard output:
#include <iostream>
int main()
{
std::cout << 356 << std::endl; // outputting an integer
return 0;
}
Output:
356
We can stream to more than just cout. There is a standard class called std::ostringstream, which we can use just like cout, but it gives us an object which can be converted to a string, rather than sending everything to standard output:
#include <sstream>
#include <iostream>
int main()
{
std::ostringstream oss;
oss << 356;
std::string number = oss.str(); // convert the stream to a string
std::cout << "Length: " << number.size() << std::endl;
std::cout << number << std::endl; // outputting a string
return 0;
}
Output:
Length: 3
356