I am trying to make a Farey sequence program with the C++ list library.
My program works fine when I use the first level; however, it crashes for every other level.
I am using Visual Studio as the compiler. I tried debug mode as well as release mode.
The program doesn't crash in release mode, but it doesn't give me any output for the even-numbered levels, and it only gives me half of the output for the odd-numbered levels.
I want to fix this problem and have it work in debug mode as well.
Any suggestions?
Here is what I've done so far:
class RULLZ: public list<Fraction>
{
private:
    list<Fraction>::iterator head, tail, buf, buf1;
public:
    Farey2()
    {
        this->push_front(Fraction(1, 1));
        this->push_front(Fraction(0, 1));
    }
    void Add(int level)
    {
        Fraction *tmp, *tmp2;
        buf = this->first();
        for (int i = 0; i < level - 1; i++)
        {
            head = this->first();
            tail = this->last();
            while (head != tail)
            {
                tmp = new Fraction(head->num, head->den);
                head++;
                if (tmp->den + head->den <= level)
                {
                    tmp2 = new Fraction(tmp->num + head->num, tmp->den + head->den);
                    this->insert(head, *tmp2);
                    head--;
                }
            }
            this->pop_back();
        }
    }
    friend ostream& operator<<(ostream& out, RULLZ& f)
    {
        for (list<Fraction>::iterator it = f.first(); it != f.last(); it++)
            out << *it;
        return out;
    }
};
class RULLZ: public list<Fraction>
Before even looking at your question, the above code is a problem. The C++ standard containers are deliberately not designed to be used as base classes (none of them have a virtual destructor), so this will cause trouble. For reasons why you should not publicly derive from a standard container, see the following (a short sketch of the hazard comes after these links):
When is it "okay"?
The risks
Why it is a bad design decision
Coding Guidelines (Page ~60)
Why inheritance is usually not the right approach
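To make the hazard concrete, here is a minimal hypothetical sketch (the names are illustrative, not taken from your code):

// Deleting a derived object through a pointer to the standard container is
// formally undefined behaviour, because std::list has no virtual destructor.
#include <list>

struct Fraction { int num, den; };

class Farey : public std::list<Fraction> { };

int main() {
    std::list<Fraction>* p = new Farey; // looks harmless...
    delete p;                           // undefined behaviour: ~list() is not virtual
}

The usual alternative is composition: keep the std::list<Fraction> as a private member of your class and expose only the operations you actually need.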
It appears you want the Add function to add the next X fractions together (if I understand your intent correctly). A better way to do that is to use a std::stack:
std::stack<Fraction, std::list<Fraction>> expression;
// add fractions to your stack here

Fraction Add(unsigned int count)
{
    // assume count is greater than 1 (adding 0 fractions makes no sense, and adding 1 is trivial)
    // and that the stack holds at least count fractions
    Fraction result(0, 1);
    for (unsigned int i = 0; i < count; ++i)
    {
        result += expression.top(); // assume operator+= has been implemented for Fraction
        expression.pop();
    }
    expression.push(result);
    return result;
}
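A hypothetical usage of that sketch (assuming Fraction has the two-argument constructor and the operator+= mentioned above):

expression.push(Fraction(1, 2));
expression.push(Fraction(1, 3));
Fraction sum = Add(2); // 1/2 + 1/3; the running total is also left on top of the stack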
Another problem you appear to have is a logic problem (again, assuming I understand your intent correctly):
for(int i=0;i<level-1;i++)
If level is the number of fractions you want to add, and you pass in 2, this loop will only include the first one. That is, it will not add fractions 0 and 1, but rather just grab fraction 0 and return it. I think you meant for this to be
for(int i=0; i < level; i++)
Which will grab both fractions 0 and 1 to add together.
I'm not sure where, but your logic for generating the series appears to be off. A simpler approach is the following:
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <iostream>
#include <vector>

struct Fraction
{
    std::uint32_t Numerator;
    std::uint32_t Denominator;

    Fraction(std::uint32_t n, std::uint32_t d) : Numerator(n), Denominator(d) { }
};

std::ostream& operator<<(std::ostream& os, const Fraction& f)
{
    os << "(" << f.Numerator << " / " << f.Denominator << ")";
    return os;
}

typedef std::vector<Fraction> FareySeries;

FareySeries generate_series(std::uint32_t depth)
{
    // Standard next-term recurrence for the Farey sequence of order `depth`:
    // given consecutive terms a/b and c/d, the following term is
    // (k*c - a) / (k*d - b) with k = floor((depth + b) / d).
    std::uint32_t a = 0;
    std::uint32_t b = 1;
    std::uint32_t c = 1;
    std::uint32_t d = depth;
    FareySeries results;
    results.emplace_back(a, b);
    while (c <= depth)
    {
        std::uint32_t k = (depth + b) / d;
        std::uint32_t nc = (k * c) - a;
        std::uint32_t nd = (k * d) - b;
        a = c;
        b = d;
        c = nc;
        d = nd;
        results.emplace_back(a, b);
    }
    return results;
}

int main()
{
    const std::uint32_t DEPTH = 4;
    FareySeries series = generate_series(DEPTH);
    std::copy(series.begin(), series.end(), std::ostream_iterator<Fraction>(std::cout, "\n"));
    return 0;
}
I want to create code that will help me work with numbers bigger than MAXINT. I heard that I can use Binary Coded Decimal (BCD) to do this, keeping every two decimal digits of the big number (converted to BCD) in one char. But how do I do this? Should I take a string as input and then somehow convert every single decimal digit to BCD? And how can I pack two converted decimal digits into one char? I'm new to C++ and don't know how to do it.
P.S. I don't want to use libraries that are "special" for this kind of problem.
As it turns out, this is actually quite simple. How about we try to take it to the next level?
Below is an implementation of a BCD number with unbounded size (or as much as memory can hold). It only supports positive integer numbers; I'll leave extending it to support negative numbers (or real numbers) as an exercise.
First things first: yes, we want to get our number as a string and then build it up from that. Since it's only an integer, this is actually quite easy to do. First, we create a helper function to aid us in identifying the digits.
int char_to_int(const char c) {
    int ret = c - '0';
    if (ret > 9 || ret < 0) throw 1; // for simplicity. Use a class derived from std::exception instead.
    return ret;
}
We can now try to implement input and output for our big number.
First Try
Having that helper guy, turning a string to a BCD-encoded buffer is easy. A common implementation may look like this:
int main() {
    unsigned char bignum[10]; // stores at most 20 BCD digits.
    std::memset(bignum, 0, sizeof(bignum));
    std::string input;
    std::cin >> input;
    try {
        if (input.size() > 20) throw 1; // Avoid problems with buffer overflow.
        for (int i = 1; i <= input.size(); i++) {
            int n = char_to_int(input[input.size() - i]);
            bignum[sizeof(bignum) - (i + 1) / 2] |= n << (i % 2) * 4; // These are bitwise operations. Google them!
        }
    }
    catch (int) {
        std::cout << "ERROR: Invalid input.\n";
        return 0; // Exit cleanly.
    }
    // bignum is now filled. Let's print it to prove it.
    for (int i = 0; i < sizeof(bignum); i++) {
        int first_digit = bignum[i] & '\x0F';         // Right side, doesn't need to shift.
        int second_digit = (bignum[i] & '\xF0') >> 4; // Left side, shifted.
        std::cout << first_digit << second_digit;
    }
}
This is not very space-efficient, however. Note that we have to store all 20 digits, even if our number is small! What if we needed 1000 digits? What if we needed 1000 numbers that may or may not have those 1000 digits? It is also error-prone: note that we had to remember to initialize the array and do a bounds check before the conversion to avoid a buffer overflow.
Second Try
We can improve our implementation using a std::vector:
int main() {
    std::vector<unsigned char> bignum; // stores any quantity of digits.
    std::string input;
    std::cin >> input;
    try {
        // For an odd number of digits, store the leading digit on its own first.
        if (input.size() % 2) bignum.push_back(char_to_int(input[0]));
        for (unsigned i = input.size() % 2; i < input.size(); i += 2) {
            int left = char_to_int(input[i]);
            int right = char_to_int(input[i + 1]);
            bignum.push_back(0);
            bignum.back() = left << 4;
            bignum.back() |= right;
        }
    }
    catch (int) {
        std::cout << "ERROR: Invalid input.\n";
        return 0; // Exit cleanly.
    }
    // bignum is now filled. Let's print it to prove it.
    for (unsigned i = 0; i < bignum.size(); ++i) {
        // Notice that we inverted this from the previous one! Try to think why.
        int first_digit = (bignum[i] & '\xF0') >> 4; // Left side, shifted.
        int second_digit = bignum[i] & '\x0F';       // Right side, doesn't need to shift.
        if (i || first_digit) std::cout << first_digit; // avoid printing a leading 0.
        std::cout << second_digit;
    }
}
Lookin' good, but that is too cumbersome. Ideally, the bignumber user shouldn't have to deal with the vector positions and all that mumbo-jumbo. We want to write code that behaves like:
int main() {
    int a;
    cin >> a;
    cout << a;
}
And it should just work.
Third Try
Turns out this is possible! Just wrap bignum into a class, with some helpful operators:
class bignum {
    std::vector<unsigned char> num_vec;

    template<typename T>
    friend T& operator<<(T& os, bignum& n);
    template<typename T>
    friend T& operator>>(T& is, bignum& n);
};

// Get input from any object that behaves like a std::istream (e.g. std::cin)
template<typename T>
T& operator>>(T& is, bignum& n) {
    std::string input;
    is >> input;
    n.num_vec.reserve(input.size());
    if (input.size() % 2) n.num_vec.push_back(char_to_int(input[0]));
    for (unsigned i = input.size() % 2; i < input.size(); i += 2) {
        int left = char_to_int(input[i]);
        int right = (i + 1) != input.size() ? char_to_int(input[i + 1]) : 0; // If odd number of digits, avoid getting garbage.
        n.num_vec.push_back(0);
        n.num_vec.back() = left << 4;
        n.num_vec.back() |= right;
    }
    return is;
}

// Output to any object that behaves like a std::ostream (e.g. std::cout)
template<typename T>
T& operator<<(T& os, bignum& n) {
    for (unsigned i = 0; i < n.num_vec.size(); ++i) {
        int first_digit = (n.num_vec[i] & '\xF0') >> 4; // Left side, shifted.
        int second_digit = n.num_vec[i] & '\x0F';       // Right side, doesn't need to shift.
        if (i || first_digit) os << first_digit; // avoid printing a leading 0.
        os << second_digit;
    }
    return os;
}
Then our main function looks much more readable:
int main() {
    bignum a;
    try {
        std::cin >> a;
    }
    catch (int) {
        std::cout << "ERROR: Invalid input.\n";
        return 0; // Exit cleanly.
    }
    std::cout << a;
}
Epilogue
And here we have it. Of course with no addition, multiplication, etc. operators, it isn't very useful. I'll leave them as an exercise. Code, code and code some more, and soon this will look like a piece of cake to you.
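To get you started, here is a minimal hypothetical sketch of what a BCD addition operator could look like for this representation. It is not part of the original answer, and it assumes bignum also declares this operator a friend (just as it does for the stream operators):

// Hypothetical sketch: add two bignums digit by digit with a decimal carry.
// Assumes the layout used above: two decimal digits per byte, most
// significant byte first.
bignum operator+(const bignum& lhs, const bignum& rhs) {
    // Unpack a byte vector into plain digits, least significant digit first.
    auto unpack = [](const std::vector<unsigned char>& bytes) -> std::vector<int> {
        std::vector<int> digits;
        for (auto it = bytes.rbegin(); it != bytes.rend(); ++it) {
            digits.push_back(*it & 0x0F);        // right (less significant) digit
            digits.push_back((*it & 0xF0) >> 4); // left (more significant) digit
        }
        return digits;
    };
    std::vector<int> a = unpack(lhs.num_vec), b = unpack(rhs.num_vec), sum;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        int s = carry + (i < a.size() ? a[i] : 0) + (i < b.size() ? b[i] : 0);
        sum.push_back(s % 10);
        carry = s / 10;
    }
    // Repack two digits per byte, most significant byte first.
    bignum result;
    if (sum.size() % 2) sum.push_back(0); // pad so the digit count is even
    for (std::size_t i = sum.size(); i >= 2; i -= 2)
        result.num_vec.push_back(static_cast<unsigned char>((sum[i - 1] << 4) | sum[i - 2]));
    return result;
}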
Please feel free to ask any questions. Good coding!
It's been a while since I used C++. I was asked in a job interview to create a C++ struct for a downsampling routine which would meet the following interface:
struct deterministic_sample
{
    deterministic_rate( double rate );
    bool operator()();
};
-- with the following behaviour:
We have an object of that class: deterministic_sample s;
We call s() N times and it returns true M times; M / N is roughly equal to the rate.
The sequence is deterministic, not random and should be the same each time
The class should be "industrial strength", for use on a busy stream.
My solution, version 2:
#include <iostream>
#include <cmath>
#include <climits>
using namespace std;

struct deterministic_sample
{
    double sampRate;
    int index;

    deterministic_sample() {
        sampRate = 0.1;
        index = 0;
    }

    void deterministic_rate( double rate ) {
        this->sampRate = rate; // Set the ivar. Not so necessary to hide data, but just complying with the interface, as given...
        this->index = 0;       // Reset the incrementer
    };

    bool operator()() {
        if (this->index == INT_MAX) {
            this->index = 0;
        }
        double multiple = this->index * this->sampRate;
        this->index++; // Increment the index
        if (fmod(multiple, 1) < this->sampRate) {
            return true;
        } else {
            return false;
        }
    };
};

int main()
{
    deterministic_sample s;      // Create a sampler
    s.deterministic_rate(0.253); // Set the rate
    int tcnt = 0;                // Count of True
    int fcnt = 0;                // Count of False
    for (int i = 0; i < 10000; i++) {
        bool o = s();
        if (o) {
            tcnt++;
        } else {
            fcnt++;
        }
    }
    cout << "Trues: " << tcnt << endl;
    cout << "Falses: " << fcnt << endl;
    cout << "Ratio: " << ((float)tcnt / (float)(tcnt + fcnt)) << endl; // Show M / N
    return 0;
}
The interviewer said this v2 code "partly" addressed the requirements. v1 didn't have the constructor (my error), and didn't deal with overflow of the int ivar.
What have I missed here to make this class robust/correct? I think it is some aspect of "industrial strength" that I've missed.
ps. for any ethical types, I've already submitted my second-chance attempt... It's just bothering me to know why this was "partly"...
What you have is far more complex than necessary. All you need to do is keep track of the current position, and return true when it goes past the threshold.
struct deterministic_sample
{
    double sampRate;
    double position;

    deterministic_sample() : sampRate(0.1), position(0.0) {
    }

    void deterministic_rate( double rate ) {
        assert(rate <= 1.0); // Only one output is allowed per input
        sampRate = rate; // Set the ivar. Not so necessary to hide data, but just complying with the interface, as given...
        // No need to reset the position, it will work with changing rates
    };

    bool operator()() {
        position += sampRate;
        if (position < 1.0)
            return false;
        position -= 1.0;
        return true;
    }
};
Use unsigned instead, and integer overflow becomes well-defined wraparound. This is very fast on normal CPUs.
The second problem I see is the mix of floating-point and integer math, which isn't really efficient. It may be better to store multiple as a member and just do multiple += rate; this saves you one integer-to-double conversion.
However, the fmod is still quite expensive. You can avoid it by keeping an int trueSoFar instead. The rate so far is then double(trueSoFar)/double(index), and you can check double(trueSoFar)/double(index) > rate, or more efficiently trueSoFar > int(index * rate). As we already saw, rate * index can be replaced by multiple += rate.
This means we are down to one double addition (multiple += rate), one FP-to-int conversion (int(multiple)), and one integer comparison.
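Putting those pieces together, a minimal sketch might look like this (names such as multiple and trueSoFar follow the text above; the struct is illustrative, not the answer's code):

#include <cassert>

struct deterministic_sample_fp {
    double rate = 0.1;     // target fraction of true results
    double multiple = 0.0; // running index * rate, kept incrementally
    int trueSoFar = 0;     // trues emitted so far

    void deterministic_rate(double r) {
        assert(r >= 0.0 && r <= 1.0);
        rate = r;
    }

    bool operator()() {
        multiple += rate;                             // one double addition
        if (trueSoFar < static_cast<int>(multiple)) { // one FP-to-int conversion, one integer comparison
            ++trueSoFar;
            return true;
        }
        return false;
    }
};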
[edit]
You can also avoid FP math altogether by keeping a 32/32 rational approximation of the rate and comparing it to the realised rate (again stored as a 32/32 ratio). Since a/b > c/d when a*d > b*c, you can use a 64-bit multiply here. Even better, for the target ratio you can choose 2^32 as a fixed denominator (i.e. unsigned long targetRate = rate * 2^32, with the denominator b = 2^32 implicit), so the check becomes (unsigned long)((unsigned long long)targetRate * index >> 32) > trueSoFar. Even on a 32-bit CPU this is fairly quick; >>32 is a no-op there.
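Again as a hypothetical sketch (names such as targetRate and trueSoFar follow the discussion above; this is not the answer's code, and the wraparound handling ties back to the unsigned point at the top):

#include <cassert>
#include <cstdint>

struct deterministic_sample_fixed {
    std::uint64_t targetRate = 0; // rate scaled by 2^32 (the implicit denominator)
    std::uint32_t index = 0;      // inputs seen so far
    std::uint32_t trueSoFar = 0;  // trues emitted so far

    void deterministic_rate(double rate) {
        assert(rate >= 0.0 && rate <= 1.0);
        targetRate = static_cast<std::uint64_t>(rate * 4294967296.0); // rate * 2^32
    }

    bool operator()() {
        ++index;
        if (index == 0) trueSoFar = 0; // index wrapped around; restart the running counts
        // Emit true while the realised ratio trueSoFar/index lags the target:
        // targetRate/2^32 > trueSoFar/index  <=>  (targetRate*index) >> 32 > trueSoFar.
        std::uint32_t wanted = static_cast<std::uint32_t>((targetRate * index) >> 32);
        if (trueSoFar < wanted) {
            ++trueSoFar;
            return true;
        }
        return false;
    }
};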
OK, so it seems there are some improvements to the efficiency which could be made (certainly), that "industrial strength" has some implications though nothing concrete (possibly the problem...), or that the constructor was incorrectly named in the question (also possible).
In any case, no one has jumped on some glaring omission that I made to my constructor (like, I see there are two ways to do a C++ constructor; you should do both to be really bullet-proof, etc.)
I guess I'll just cross my fingers and hope I still progress to the soft-skills interview!
Thanks all.
Consider the following sample code (I actually work with longer binary strings but this is enough to explain the problem):
void enumerateAllSubsets(unsigned char d) {
    unsigned char n = 0;
    do {
        cout << binaryPrint(n) << ",";
    } while ( n = (n - d) & d );
}
The function (due to Knuth) effectively loops through all subsets of a binary string. For example:
33 = '00100001' in binary and enumerateAllSubsets(33) would produce:
00000000, 00100000, 00000001, 00100001.
I need to write a #define which would make
macroEnumerate(n,33)
cout<<binaryPrint(n)<<",";
behave in a way equivalent to enumerateAllSubsets(33). (well, the order might be rearranged)
Basically i need the ability to perform various operations on subsets of a set.
Doing something similar with for-loops is trivial:
for (int i = 0; i < a.size(); i++)
    foo(a[i]);

can be replaced with:

#define foreach(index, container) for (int index = 0; index < container.size(); index++)
...
foreach(i, a)
    foo(a[i]);
The problem with enumerateAllSubsets() is that the loop body needs to be executed once unconditionally and as a result the do-while cannot be rewritten as for.
I know that the problem can be solved by STL-style templated function and a lambda passed to it (similar to STL for_each function), but some badass #define macro seems like a cleaner solution.
Assuming C++11, define a range object:
#include <iostream>
#include <iterator>
#include <cstdlib>

template <typename T>
class Subsets {
public:
    Subsets(T d, T n = 0) : d_(d), n_(n) { }
    Subsets begin() const { return *this; }
    Subsets end() const { return {0, 0}; }
    bool operator!=(Subsets const & i) const { return d_ != i.d_ || n_ != i.n_; }
    Subsets & operator++() {
        if (!(n_ = (n_ - d_) & d_)) d_ = 0;
        return *this;
    }
    T operator*() const { return n_; }
private:
    T d_, n_;
};

template <typename T>
inline Subsets<T> make_subsets(T t) { return Subsets<T>(t); }

int main(int /*argc*/, char * argv[]) {
    int d = atoi(argv[1]);
    for (auto i : make_subsets(d))
        std::cout << i << "\n";
}
I've made it quite general in case you want to work with, e.g., uint64_t.
One option would be to use a for loop that always runs at least once, such as this:
for (bool once = true; once? (once = false, true) : (n = (n - d) & d); )
// loop body
On the first iteration, the once variable gets cleared and the expression evaluates to true, so the loop executes. From that point forward, the actual test-and-step logic controls the loop.
From here, rewriting this to a macro should be a lot easier.
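For example, a hypothetical completion of that idea (not from the original answer) might look like:

// 'n' must already be declared in the enclosing scope, and 'd' is evaluated
// several times, so pass a plain variable or constant.
#define macroEnumerate(n, d)                                                \
    for (bool once_ = ((n) = 0, true);                                      \
         once_ ? (once_ = false, true) : (((n) = ((n) - (d)) & (d)) != 0); )

// Usage, matching the question:
//   unsigned char n;
//   macroEnumerate(n, 33)
//       cout << binaryPrint(n) << ",";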
Hope this helps!
You can do a multiline macro that uses an expression, like this:
#define macroenum(n, d, expr) \
    n = 0;                    \
    do {                      \
        (expr);               \
    } while (n = (n - d) & d)

int main(int argc, const char* argv[])
{
    enumerateAllSubsets(33);
    int n;
    macroenum(n, 33, cout << n << ",");
}
As others have mentioned, many will not consider this very clean; amongst other things, it relies on the variable 'n' existing in scope. You may need to wrap expr in another set of parentheses, but I tested it with g++ and got the same output as enumerateAllSubsets.
It seems like your goal is to be able to do something like enumerateAllSubsets but change the action performed for each iteration.
In C++ you can do this with a function in the header file:
template<typename Func>
inline void enumerateAllSubsets(unsigned char d, Func f)
{
    unsigned char n = 0;
    do { f(n); } while ( n = (n - d) & d );
}
Sample usage:
enumerateAllSubsets(33, [](auto n) { cout << binaryPrint(n) << ','; } );
I have the following Python snippet that I would like to reproduce using C++:
from itertools import count, imap

source = count(1)
pipe1 = imap(lambda x: 2 * x, source)
pipe2 = imap(lambda x: x + 1, pipe1)
sink = imap(lambda x: 3 * x, pipe2)

for i in sink:
    print i
I've heard of Boost Phoenix, but I couldn't find an example of a lazy transform behaving in the same way as Python's imap.
Edit: to clarify my question, the idea is not only to apply functions in sequence using a for loop, but rather to be able to use algorithms like std::transform on infinite generators. The way the functions are composed (in a more functional-language-like style) is also important, as the next step is function composition.
Update: thanks bradgonesurfing, David Brown, and Xeo for the amazing answers! I chose Xeo's because it's the most concise and it gets me right where I wanted to be, but David's was very important into getting the concepts through. Also, bradgonesurfing's tipped Boost::Range :).
Employing Boost.Range:
int main(){
    auto map = boost::adaptors::transformed; // shorten the name

    auto sink = generate(1) | map([](int x){ return 2*x; })
                            | map([](int x){ return x+1; })
                            | map([](int x){ return 3*x; });

    for(auto i : sink)
        std::cout << i << "\n";
}
Live example including the generate function.
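Since the live example is not reproduced here, one hypothetical way to write a generate with the same shape is a Boost counting range (bounded by INT_MAX rather than truly infinite, which is usually close enough):

#include <boost/iterator/counting_iterator.hpp>
#include <boost/range/iterator_range.hpp>
#include <limits>

// Hypothetical stand-in for the generate() used above.
boost::iterator_range<boost::counting_iterator<int>> generate(int from)
{
    return boost::make_iterator_range(
        boost::counting_iterator<int>(from),
        boost::counting_iterator<int>(std::numeric_limits<int>::max()));
}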
I think the most idiomatic way to do this in C++ is with iterators. Here is a basic iterator class that takes an iterator and applies a function to its result:
template<class Iterator, class Function>
class LazyIterMap
{
private:
    Iterator i;
    Function f;
public:
    LazyIterMap(Iterator i, Function f) : i(i), f(f) {}
    decltype(f(*i)) operator* () { return f(*i); }
    void operator++ () { ++i; }
};

template<class Iterator, class Function>
LazyIterMap<Iterator, Function> makeLazyIterMap(Iterator i, Function f)
{
    return LazyIterMap<Iterator, Function>(i, f);
}
This is just a basic example and is still incomplete as it has no way to check if you've reached the end of the iterable sequence.
Here's a recreation of your example python code (also defining a simple infinite counter class).
#include <iostream>

class Counter
{
public:
    Counter (int start) : value(start) {}
    int operator* () { return value; }
    void operator++ () { ++value; }
private:
    int value;
};

int main(int argc, char const *argv[])
{
    Counter source(0);
    auto pipe1 = makeLazyIterMap(source, [](int n) { return 2 * n; });
    auto pipe2 = makeLazyIterMap(pipe1, [](int n) { return n + 1; });
    auto sink = makeLazyIterMap(pipe2, [](int n) { return 3 * n; });

    for (int i = 0; i < 10; ++i, ++sink)
    {
        std::cout << *sink << std::endl;
    }
}
Apart from the class definitions (which are just reproducing what the python library functions do), the code is about as long as the python version.
I think the boost::rangex library is what you are looking for. It should work nicely with the new C++ lambda syntax.
int pipe1(int val) {
    return 2*val;
}

int pipe2(int val) {
    return val+1;
}

int sink(int val) {
    return val*3;
}

for(int i=0; i < SOME_MAX; ++i)
{
    cout << sink(pipe2(pipe1(i))) << endl;
}
I know, it's not quite what you were expecting, but it certainly evaluates at the time you want it to, although not with an iterator interface. A very related article is this:
Component programming in D
Edit 6/Nov/12:
An alternative, still sticking to bare C++, is to use function pointers and construct your own piping for the above functions (vector of function pointers from SO q: How can I store function pointer in vector?):
typedef std::vector<int (*)(int)> funcVec;

int runPipe(funcVec funcs, int sinkVal) {
    int running = sinkVal;
    for (funcVec::iterator it = funcs.begin(); it != funcs.end(); ++it) {
        running = (*(*it))(running); // not sure of the braces and asterisks here
    }
    return running;
}
This is intended to run through all the functions in a vector of such and return the resulting value. Then you can:
funcVec funcs;
funcs.push_back(&pipe1);
funcs.push_back(&pipe2);
funcs.push_back(&sink);

for(int i=0; i < SOME_MAX; ++i)
{
    cout << runPipe(funcs, i) << endl;
}
Of course you could also construct a wrapper for that via a struct (I would use a closure if C++ did them...):
struct pipeWork {
    funcVec funcs;
    int run(int i);
};

int pipeWork::run(int i) {
    //... guts as runPipe, or keep it separate and call:
    return runPipe(funcs, i);
}

// later...
pipeWork kitchen;
kitchen.funcs = someFuncs;

cout << kitchen.run(5) << endl;
Or something like that. Caveat: No idea what this will do if the pointers are passed between threads.
Extra caveat: If you want to do this with varying function interfaces, you will end up having to have a load of void *(void *)(void *) functions so that they can take whatever and emit whatever, or lots of templating to fix the kind of pipe you have. I suppose ideally you'd construct different kinds of pipe for different interfaces between functions, so that a | b | c works even when they are passing different types between them. But I'm going to guess that that's largely what the Boost stuff is doing.
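As a hypothetical sketch of that templated direction (not from the original answer, and assuming C++14 for return-type deduction and generic lambdas), a small variadic compose lets differently typed stages chain as long as adjacent types line up:

// compose(f, g, h)(x) == h(g(f(x))); each stage may take and return a
// different type, as long as neighbouring stages agree.
template <typename F>
auto compose(F f) { return f; }

template <typename F, typename... Rest>
auto compose(F f, Rest... rest) {
    return [f, rest...](auto x) { return compose(rest...)(f(x)); };
}

// Usage with the pipe functions above:
//   auto pipeline = compose(pipe1, pipe2, sink);
//   cout << pipeline(i) << endl;   // same as sink(pipe2(pipe1(i)))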
Depending on the simplicity of the functions:

#define pipe1(x) (2*(x))
#define pipe2(x) (pipe1(x)+1)
#define sink(x)  (pipe2(x)*3)

int j = 1;
while( ++j > 0 )
{
    std::cout << sink(j) << std::endl;
}
I've read through Stack Overflow threads multiple times in the past, and they're often quite helpful. However, I've run into a problem that simply doesn't make sense to me, and I'm trying to figure out what I missed. Here are the sections of the code that I'm having trouble with:
class BigInts
{
public:
    static const std::size_t MAXLEN = 100;
    BigInts(signed int i);   //constructor
    BigInts(std::string &);  //other constructor
    std::size_t size() const;
    digit_type operator[](std::size_t ) const;
private:
    digit_type _data[MAXLEN];
    bool _negative;
    int _significant;
};

//nonmember functions
std::ostream & operator << (std::ostream &, const BigInts &);

BigInts::BigInts(signed int i)
{
    _negative = (i < 0);
    if (i < 0)
    {
        i = -1*i;
    }
    std::fill(_data, _data+MAXLEN, 0);
    if (i != 0)
    {
        int d(0);
        int c(0);
        do
        {
            _data[d++] = ( i % 10);
            i = i / 10;
            c++; //digit counter
        } while (i > 0);
        //_significant = c;  //The problem line
        assert(c <= MAXLEN); //checks if int got too big
    }
}

std::size_t BigInts::size() const
{
    std::size_t pos(MAXLEN-1);
    while (pos > 0 && _data[pos] == 0)
        --pos;
    return pos+1;
}

std::ostream & operator << (std::ostream & os, const BigInts & b)
{
    for (int i = (b.size() - 1); i >= 0; --i)
        os << b[i];
    return os;
}

int main()
{
    signed int a, b;
    std::cout << "enter first number" << std::endl;
    std::cin >> a;
    std::cout << "enter second number" << std::endl;
    std::cin >> b;
    BigInts d(a), e(b), f(b);
    std::cout << d << " " << e << " " << f;
Major edit, switched from an attempted dummy version of the code to the actual code I'm using, complete with the original variable names. I tried to remove anything that isn't relevant to the code I'm currently working with, but if you see a strange name or call in there, let me know and I can post the associated portion.
The code had been working fine prior to the introduction of _significant, which is a variable I added to give the class some more functionality. However, when I attempted to drive the basic parts of it using the main function you see displayed, it produced large errors. For example, when I input 200 and 100 for a and b respectively, it output 201, 1, and 3 for d, e, and f. As it currently stands, the ONLY place _significant appears is where I'm attempting to assign the value of c to it.
The only error I can see right now is that _significant isn't initialized when the input is zero.
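For instance, a hypothetical rewrite of the posted constructor (just a sketch, not the answer's exact suggestion) that gives _significant a value on every path:

// Sketch: initialize the flag and the counter in the member-initializer list
// so the zero (and negative) cases still leave _significant well defined.
BigInts::BigInts(signed int i) : _negative(i < 0), _significant(0)
{
    if (i < 0)
        i = -i;
    std::fill(_data, _data + MAXLEN, 0);
    while (i > 0)
    {
        _data[_significant++] = i % 10;
        i /= 10;
    }
    assert(_significant <= static_cast<int>(MAXLEN)); // the int didn't produce too many digits
}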
Step through it in a debugger, make sure the the right digits are ending up in the array and that the array data isn't being overwritten unexpectedly.
EDIT: It works for me (cleaned up slightly). More cleaned up, also working: http://ideone.com/MDQF8
If your class is busted purely by assigning to a member variable, that means stack corruption without a doubt. Whilst I can't see the source offhand, you should replace all buffers with self-length-checking classes to verify accesses.
The line i - 1; in the original code looks highly suspicious. Did you want to write i -= 1; or --i; or something else?
It computes i - 1 and then throws away the result, so i itself is never changed.