I need to generate all combinations without repetition from an array. I have read a bit about this, and the suggestions point to recursion.
I have an array
arr = [["A"], ["B"], ["C"], ["D"], ["E"], ["F"]]
I read that I can solve this problem with recursion, along these lines:
function combinations(arr, n, k)
    // do something
    // then
    return combinations(arr, n, k)
In my case [A, B, C, D] is equivalent to [A, B, D, C].
I found this example in C++
http://www.martinbroadhurst.com/combinations.html
But I couldn't reproduce it.
Any suggestions on how I can solve this?
PS: I'm using Python, but I'm more interested in the algorithm than the language.
For any combinatorics problem, the best way to program it is to figure out the recurrence relation for the counting argument. In the case of combinations, the recurrence relation is simply C(n, k) = C(n - 1, k - 1) + C(n - 1, k).
But what does this mean exactly? Notice that C(n - 1, k - 1) means that we have taken the first element of the array, and need k - 1 more elements from the other n - 1 elements. Similarly, C(n - 1, k) means that we won't choose the first element of our array as one of the k elements. But remember that if k is 0, then C(n, k) = 1; otherwise, if n is 0, then C(n, k) = 0. In our problem, k == 0 would return a set containing the empty set, while n == 0 would return the empty set. With this in mind, the code structure would look like this:
def combinations(arr, k):
    if k == 0:
        return [[]]
    elif len(arr) == 0:
        return []

    result = []
    chosen = combinations(arr[1:], k - 1)  # we choose the first element of arr as one of the k elements we need
    notChosen = combinations(arr[1:], k)   # first element not chosen in set of k elements
    for combination in chosen:
        result.append([arr[0]] + combination)
    for combination in notChosen:
        result.append(combination)
    return result
Now, this function can be optimized by performing memoization (but that can be left as an exercise to you, the reader). As an additional exercise, can you sketch out what the permutation function would look like, starting from its counting relation?
Hint:
P(n, k) = C(n, k)k! = [C(n - 1, k - 1) + C(n - 1, k)]k! = P(n - 1, k - 1)k + P(n - 1, k)
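Since the question says the language matters less than the algorithm, here is a small C++ sanity check of the hinted recurrence (my own sketch, not part of the original answer): it evaluates P(n, k) = P(n - 1, k - 1)·k + P(n - 1, k) with the base cases P(n, 0) = 1 and P(0, k) = 0 for k > 0, and the results match n!/(n - k)!.

#include <iostream>

// Hedged sketch: evaluate the hinted recurrence directly.
// Base cases: P(n, 0) = 1 and P(0, k) = 0 for k > 0.
long long P(int n, int k) {
    if (k == 0) return 1;
    if (n == 0) return 0;
    return P(n - 1, k - 1) * k + P(n - 1, k);
}

int main() {
    std::cout << P(4, 2) << '\n';  // 12, which equals 4!/2!
    std::cout << P(6, 4) << '\n';  // 360, which equals 6!/2!
}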
[Heck... by the time I posted the answer, the C++ tag went away]
[Edited with more examples, including using char]
Comments in the code:
#include <iostream>
#include <vector>

// Function that recursively does the actual job
template <typename T, typename Function> void doCombinations(
    size_t num, const std::vector<T>& values,
    size_t start, std::vector<T>& combinationSoFar,
    Function action
) {
    if(0 == num) { // the entire combination is complete
        action(combinationSoFar);
    }
    else {
        // walk through with the current position to the right,
        // taking care to let enough walking room for the rest of the elements
        for(size_t i = start; i < values.size() + 1 - num; i++) {
            // push the current value there
            combinationSoFar.push_back(values[i]);
            // recursive call with one less element to enter combination
            // and one position to the right for the next element to consider
            doCombinations(num - 1, values, i + 1, combinationSoFar, action);
            // pop the current value, we are going to move it to the right anyway
            combinationSoFar.pop_back();
        }
    }
}

// function for the user to call. Prepares everything needed for the
// doCombinations
template <typename T, typename Function>
void for_each_combination(
    size_t numInCombination,
    const std::vector<T>& values,
    Function action
) {
    std::vector<T> combination;
    doCombinations(numInCombination, values, 0, combination, action);
}

// dummy do-something with the vector
template <typename T> void cout_vector(const std::vector<T>& v) {
    std::cout << '[';
    for(size_t i = 0; i < v.size(); i++) {
        if(i) {
            std::cout << ",";
        }
        std::cout << v[i];
    }
    std::cout << ']' << std::endl;
}

// Assumes the T type supports both addition and ostream <<
template <typename T> void adder(const std::vector<T>& vals) {
    T sum = static_cast<T>(0);
    for(T v : vals) {
        sum += v;
    }
    std::cout << "Sum: " << sum << " for ";
    cout_vector(vals);
}

int main() {
    std::cout << "Char combinations" << std::endl;
    std::vector<char> char_vals{'A', 'B', 'C', 'D', 'E'};
    for_each_combination(3, char_vals, cout_vector<char>);

    std::cout << "\nInt combinations" << std::endl;
    std::vector<int> int_vals{0, 1, 2, 3, 4};
    for_each_combination(3, int_vals, cout_vector<int>);

    std::cout << "\nFloat combination adder" << std::endl;
    std::vector<float> float_vals{0.0, 1.1, 2.2, 3.3, 4.4};
    for_each_combination(3, float_vals, adder<float>);

    return 0;
}
Output:
Char combinations
[A,B,C]
[A,B,D]
[A,B,E]
[A,C,D]
[A,C,E]
[A,D,E]
[B,C,D]
[B,C,E]
[B,D,E]
[C,D,E]
Int combinations
[0,1,2]
[0,1,3]
[0,1,4]
[0,2,3]
[0,2,4]
[0,3,4]
[1,2,3]
[1,2,4]
[1,3,4]
[2,3,4]
Float combination adder
Sum: 3.3 for [0,1.1,2.2]
Sum: 4.4 for [0,1.1,3.3]
Sum: 5.5 for [0,1.1,4.4]
Sum: 5.5 for [0,2.2,3.3]
Sum: 6.6 for [0,2.2,4.4]
Sum: 7.7 for [0,3.3,4.4]
Sum: 6.6 for [1.1,2.2,3.3]
Sum: 7.7 for [1.1,2.2,4.4]
Sum: 8.8 for [1.1,3.3,4.4]
Sum: 9.9 for [2.2,3.3,4.4]
Related
In Python, accessing a subset of a multidimensional NumPy array is normally done using the slicing syntax [bx:ex] for a 1D array, [bx:ex,by:ey] for a 2D array, and so on and so forth. It is also possible to write generic code such as
def foo(Vin, Vout, lows, highs):
    # Vin and Vout are NumPy arrays with dimension len(lows)
    # and len(lows) == len(highs)
    S = tuple(slice(l, h) for l, h in zip(lows, highs))
    Vout[S] = Vin[S]
I would like to achieve something similar in C++, where the data is stored in a std::vector, with the same performance as (or better than) a bunch of nested for loops, which for a 3D array would look like
for (int k = lz; k < hz; ++k)
    for (int j = ly; j < hy; ++j)
        for (int i = lx; i < hx; ++i)
            Vout[i + nx*(j + ny*k)] = Vin[i + nx*(j + ny*k)];
Could this be done using C++20 ranges?
The long term goal is to generate lazily evaluated views of subsets of multidimensional arrays that can be combined together. In other words, being able to fuse loops without creating intermediate arrays.
I am not sure about the performance, but here is one option.
You create a templated struct MD<N,M,L> that takes array dimensions N,M,L and has a static function slice.
slice takes a flat input range and one Slice instance per dimension and returns a corresponding multidimensional range over the elements of the flat input range.
The Slice instances are just structs containing a start index and an optional end index.
You can use deep_flatten from this SO answer to prevent having to use nested for loops over the multidimensional range. Note that the returned range is just an input_range, which does not have a rich interface.
#include <vector>
#include <ranges>
#include <tuple>
#include <cassert>
#include <iostream>
template <size_t dim>
struct Slice {
    // default constructor leaves begin at zero and end at dim. Corresponds to the whole dimension
    constexpr Slice() = default;

    // Create a slice with a single index
    constexpr Slice(size_t i) : begin(i), end(i+1) {
        assert( (0 <= i) && (i < dim));
    }

    // Create a slice with a start and an end index
    constexpr Slice(size_t s, size_t e) : begin(s), end(e+1) {
        assert( (0 <= s) && (s <= e) && (e < dim) );
    }

    size_t begin {0};
    size_t end {dim};
};
// An adaptor object to interpret a flat range as a multidimensional array
template <size_t dim, size_t... dims>
struct MD {
    constexpr static auto dimensions = std::make_tuple(dim, dims...);

    consteval static size_t size(){
        if constexpr (sizeof...(dims) > 0) {
            return dim*(dims * ...);
        }
        else {
            return dim;
        }
    }

    // returns a multidimensional range over the elements in the flat array
    template <typename Rng>
    constexpr static auto slice(
        Rng&& range,
        Slice<dim> const& slice,
        Slice<dims> const&... slices
    )
    {
        return slice_impl(range, 0, slice, slices...);
    }

    template <typename Rng>
    constexpr static auto slice_impl(
        Rng&& range,
        size_t flat_index,
        Slice<dim> const& slice,
        Slice<dims> const&... slices
    )
    {
        if constexpr (std::ranges::sized_range<Rng>) { assert(std::size(range) >= size()); }
        static_assert(sizeof...(slices) == sizeof...(dims), "wrong number of slice arguments.");

        if constexpr (sizeof...(slices) == 0)
        {
            // end recursion at inner most range
            return range | std::views::drop(flat_index*dim + slice.begin) | std::views::take(slice.end - slice.begin);
        }
        else
        {
            // for every index to be kept in this dimension, recurse to the next dimension and increment the flat_index
            return std::views::iota(slice.begin, slice.end) | std::views::transform(
                [&range, flat_index, slices...](size_t i){
                    return MD<dims...>::slice_impl(range, flat_index*dim + i, slices...);
                }
            );
        }
    }

    // convenience function for the full view
    template <typename Rng>
    constexpr static auto as_range(Rng&& range){
        return slice(range, Slice<dim>{}, Slice<dims>{}...);
    }
};
// recursively join a range of ranges
// https://stackoverflow.com/questions/63249315/use-of-auto-before-deduction-of-auto-with-recursive-concept-based-fun
template <typename Rng>
auto flat(Rng&& rng) {
    using namespace std::ranges;
    auto joined = rng | views::join;
    if constexpr (range<range_value_t<decltype(joined)>>) {
        return flat(joined);
    } else {
        return joined;
    }
}
int main()
{
    static_assert(MD<2,3,2>::size() == 12);
    static_assert(std::get<0>(MD<2,3,2>::dimensions) == 2);
    static_assert(std::get<1>(MD<2,3,2>::dimensions) == 3);
    static_assert(std::get<2>(MD<2,3,2>::dimensions) == 2);

    std::vector v = {1,2,3,4,5,6,7,8,9,10,11,12};

    // obtain the full view of the data, interpreted as a 2x3x2 array
    auto full = MD<2,3,2>::as_range(v);

    // print the full view
    std::cout << "data interpreted as 2x3x2 array:\n";
    for (size_t i = 0; i < full.size(); i++) {
        std::cout << "index " << i << ":\n";
        for (auto const& d3 : full[i]) {
            for (auto const& val : d3) {
                std::cout << val << " ";
            }
            std::cout << "\n";
        }
    }
    std::cout << "\n";

    auto sliced = MD<2,3,2>::slice(
        v,
        {},    // 1st dim: take all elements along this dim
        {1,2}, // 2nd dim: take indices 1:2
        {0}    // 3rd dim: take only index 0
    );

    std::cout << "2x2x1 Slice with indices {{}, {1,2}, {0}} of the 2x3x2 data:\n";
    for (size_t i = 0; i < 2; ++i) { // index-based loop
        for (size_t j = 0; j < 2; ++j) {
            std::cout << sliced[i][j][0] << " ";
        }
        std::cout << "\n";
    }
    std::cout << "\n";

    for (auto& val : flat(sliced)) {
        val *= val;
    }

    // print the whole flat data
    std::cout << "\nThe whole data, after squaring all elements in sliced view:\n";
    for (auto const& val : v) {
        std::cout << val << " ";
    }
}
Output:
data interpreted as 2x3x2 array:
index 0:
1 2
3 4
5 6
index 1:
7 8
9 10
11 12
2x2x1 Slice with indices {{}, {1,2}, {0}} of the 2x3x2 data:
3 5
9 11
The whole data, after squaring all elements in sliced view:
1 2 9 4 25 6 7 8 81 10 121 12
Live Demo on godbolt compiler explorer
This is a prototype. I am sure the ergonomics can be improved.
Edit
A first quick and dirty benchmark of assigning a 6x6x6 view with another 6x6x6 view out of a 10x10x10:
Quickbench
A nested for loop over the multidimensional range is about 3 times slower than the traditional nested for-loop. Flattening the view using deep_flatten/std::views::join seems to make it 20-30 times slower. Apparently the compiler is having a hard time optimizing here.
I have this problem: given a vector with n numbers, rearrange the numbers so that the even ones end up on odd positions and the odd numbers end up on even positions. E.g. if I have the vector 2 6 7 8 9 3 5 1, the output should be 2 7 6 9 8 3 5 1. The count starts from 1, so position 1 (which is actually index 0) should hold an even number, position 2 (index 1) an odd number, and so on. Now this is easy if there are as many odd as even numbers, say 4 even and 4 odd numbers in the vector, but what if the number of odd numbers differs from the number of even numbers, like in the above example? How do I solve that? I attached the code with one of the attempts I made, but it doesn't work. Can I get some help please? I ask you to keep it simple, meaning only vectors and such, no weird methods or anything, because I'm a beginner and I only know the basics. Thanks in advance!
I have to mention that n_initial is globally declared and holds the number of vector elements, and v_initial is the initial vector with the elements that need to be rearranged.
The task says to add the remaining numbers to the end of the vector. For example, if there are 3 odd and 5 even numbers, the 2 extra even numbers should go at the end of the vector.
void vector_pozitii_pare_impare(int v_initial[])
{
    int v_pozitie[50], c1 = 0, c2 = 1;
    for (i = 0; i < n_initial; i++)
    {
        if (v_initial[i] % 2 == 0)
        {
            bool isTrue = 1;
            for (int k = i + 1; k < n_initial; k++)
            {
                if (v_initial[k] % 2 != 0)
                    isTrue = 0;
            }
            if (isTrue)
            {
                v_pozitie[c1] = v_initial[i];
                c1++;
            }
            else
            {
                v_pozitie[c1] = v_initial[i];
                c1 += 2;
            }
        }
        else
        {
            bool isTrue = 1;
            for (int j = i + 1; j < n_initial; j++)
            {
                if (v_initial[j] % 2 == 0)
                {
                    isTrue = 0;
                }
                if (isTrue)
                {
                    v_pozitie[c2] = v_initial[i];
                    c2++;
                }
                else
                {
                    v_pozitie[c2] = v_initial[i];
                    c2 += 2;
                }
            }
        }
    }
}
This may not be a perfect solution, and it just popped right out of my mind without being tested or verified, but it should give you an idea.
(Let A,B,C,D be odd numbers and 0,1,2 even numbers correspondingly)
Given:
A 0 B C D 1 2 (random ordered list of odd/even numbers)
Wanted:
A 0 B 1 C 2 D (input sequence altered to match the wanted odd/even criteria)
Next, we invent the steps required to get from given to wanted:
// look at 'A' -> match, next
// Result: A 0 B C D 1 2
// look at '0' -> match, next
// Result: A 0 B C D 1 2
// look at 'B' -> match, next
// Result: A 0 B C D 1 2
// look at 'C' -> mismatch, remember index and find first match starting from index+1
// Result: A 0 B C D ->1<- 2
// now swap the numbers found at the remembered index and the found one.
// Result: A 0 B 1 D C 2
// continue until the whole list has been consumed.
As I said, this algorithm may not be perfect, but my intention is to give you an example of how to solve these kinds of problems. It's not good to always think in code first, especially not with a problem like this. So you should first think about where you start, what you want to achieve, and then carefully think of how to get there step by step.
I feel I have to mention that I did not provide an example in real code, because once you get the idea, the execution should be pretty much straightforward.
Oh, and just a small remark: Almost nothing about your code is C++.
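For readers who prefer to see the idea spelled out, here is one possible reading of the swap walk described above. This is a hedged sketch, not the answerer's code: the function name and the decision to simply stop when no element with the wanted parity remains (which leaves the surplus numbers at the end) are my own assumptions.

#include <cstddef>
#include <utility>
#include <vector>

// Sketch of the swap idea: position 1 (index 0) should hold an even number,
// position 2 an odd one, and so on. On a mismatch, remember the index and
// swap in the first later element with the wanted parity.
void rearrange_even_odd(std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        const bool wantEven = (i % 2 == 0);
        if ((v[i] % 2 == 0) == wantEven) continue;              // match, next
        std::size_t j = i + 1;                                  // mismatch: scan to the right
        while (j < v.size() && (v[j] % 2 == 0) != wantEven) ++j;
        if (j == v.size()) break;                               // no match left: leftovers stay at the end
        std::swap(v[i], v[j]);                                  // swap the remembered and the found position
    }
}

For the example input {2, 6, 7, 8, 9, 3, 5, 1} this yields {2, 7, 6, 9, 8, 3, 5, 1}.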
A simple solution, which is not very efficient, would be to split the vector into two vectors that contain the even and the uneven numbers, then always take one from the even, one from the uneven, and finally append the remainder from whichever one is not yet fully consumed.
Some C++ that actually uses vectors (you can use an array the same way, but you would need to change the pointer arithmetic):
I did not test it, but the principle should be clear; it is not very efficient, though.
EDIT: The answer below by #AAAAAAAAARGH outlines a better algorithmic idea, that is inplace and more efficient.
#include <vector>

void change_vector_even_uneven(std::vector<unsigned>& in_vec){
    std::vector<unsigned> even;
    std::vector<unsigned> uneven;
    for (auto it = in_vec.begin(); it != in_vec.end(); it++){
        if ((*it) % 2 == 0) even.push_back(*it);
        else uneven.push_back(*it);
    }
    auto even_it = even.begin();
    auto uneven_it = uneven.begin();
    for (auto it = in_vec.begin(); it != in_vec.end(); it++){
        if (even_it == even.end()){
            (*it) = (*uneven_it);
            uneven_it++;
            continue;
        }
        if (uneven_it == uneven.end()){
            (*it) = (*even_it);
            even_it++;
            continue;
        }
        if ((it - in_vec.begin()) % 2 == 0){
            (*it) = (*even_it);
            even_it++;
        }
        else{
            (*it) = (*uneven_it);
            uneven_it++;
        }
    }
}
The solution is simple. We sort the even and odd values into a data structure. In a loop, we iterate over all source values. If they are even (val % 2 == 0), we add them at the end of a std::deque for evens, and if odd, we add them to a std::deque for odds.
Later, we will extract the values from the front of the std::deque.
So, we have a first in first out principle.
The std::deque is optimized for such purposes.
Later, we make a loop with an alternating branch in it. We alternately extract data from the even queue and then from the odd queue. If a queue is empty, we do not extract data.
We do not need an additional std::vector and can reuse the old one.
With that, we do not need to care whether the number of evens and odds is the same. It will of course always work.
Please see below one of millions of possible solutions:
#include <iostream>
#include <vector>
#include <deque>

int main() {
    std::vector testData{ 2, 6, 7, 8, 9, 3, 5, 1 };

    // Show initial data
    std::cout << "\nInitial data: ";
    for (const int i : testData) std::cout << i << ' ';
    std::cout << '\n';

    // We will use deques to store odd and even numbers
    // With that we can efficiently push back and pop front
    std::deque<int> evenNumbers{};
    std::deque<int> oddNumbers{};

    // Sort the original data into the specific container
    for (const int number : testData)
        if (number % 2 == 0)
            evenNumbers.push_back(number);
        else
            oddNumbers.push_back(number);

    // Take alternating the data from the even and the odd values
    bool takeEven{ true };

    for (size_t i{}; !evenNumbers.empty() || !oddNumbers.empty(); ) {
        if (takeEven) {                            // Take even numbers
            if (not evenNumbers.empty()) {         // As long as there are even values
                testData[i] = evenNumbers.front(); // Get the value from the front
                evenNumbers.pop_front();           // Remove first value
                ++i;
            }
        }
        else {                                     // Now we take odd numbers
            if (not oddNumbers.empty()) {          // As long as there are odd values
                testData[i] = oddNumbers.front();  // Get the value from the front
                oddNumbers.pop_front();            // Remove first value
                ++i;
            }
        }
        // Next take the other container
        takeEven = not takeEven;
    }

    // Show result
    std::cout << "\nResult: ";
    for (const int i : testData) std::cout << i << ' ';
    std::cout << '\n';

    return 0;
}
Here is yet another solution (using STL), in case you want a stable result (that is, the order of your values is preserved).
#include <algorithm>
#include <iterator>
#include <vector>

int main()
{
    auto ints = std::vector<int>{ 2, 6, 7, 8, 9, 3, 5, 1 };

    // split list to even/odd sections -> [2, 6, 8, 7, 9, 3, 5, 1]
    const auto it = std::stable_partition(
        ints.begin(), ints.end(), [](auto value) { return value % 2 == 0; });

    auto results = std::vector<int>{};
    results.reserve(ints.size());

    // merge both parts with equal size
    auto a = ints.begin(), b = it;
    while (a != it && b != ints.end()) {
        results.push_back(*a++);
        results.push_back(*b++);
    }

    // copy remaining values to end of list
    std::copy(a, it, std::back_inserter(results));
    std::copy(b, ints.end(), std::back_inserter(results));
}
The result is [2, 7, 6, 9, 8, 3, 5, 1]. The complexity is O(n).
This answer, like some of the others, divides the data and then reassembles the result. The standard library std::partition_copy is used to separate the even and odd numbers into two containers. Then the interleave function assembles the result by alternately copying from two input ranges.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

template <typename InIt1, typename InIt2, typename OutIt>
OutIt interleave(InIt1 first1, InIt1 last1, InIt2 first2, InIt2 last2, OutIt dest)
{
    for (;;) {
        if (first1 == last1) {
            return std::copy(first2, last2, dest);
        }
        *dest++ = *first1++;
        if (first2 == last2) {
            return std::copy(first1, last1, dest);
        }
        *dest++ = *first2++;
    }
}

void reorder_even_odd(std::vector<int> &data)
{
    auto is_even = [](int value) { return (value & 1) == 0; };

    // split
    std::vector<int> even, odd;
    std::partition_copy(begin(data), end(data), back_inserter(even), back_inserter(odd), is_even);

    // merge
    interleave(begin(even), end(even), begin(odd), end(odd), begin(data));
}

int main()
{
    std::vector<int> data{ 2, 6, 7, 8, 9, 3, 5, 1 };
    reorder_even_odd(data);
    for (int value : data) {
        std::cout << value << ' ';
    }
    std::cout << '\n';
}
Demo on Compiler Explorer
As suggested, I am using vectors and STL.
You don't need to be a great mathematician to see that v_pozitie will start with pairs of even and odd values and end with the integers that are not part of those initial pairs.
I then update three iterators into v_pozitie (no temporary containers are needed to calculate the result): even, odd and end (avoiding push_back), and would code it this way:
#include <algorithm>
#include <iostream>
#include <vector>

void vector_pozitii_pare_impare(std::vector<int>& v_initial, std::vector<int>& v_pozitie) {
    int nodd (0), neven (0);
    std::for_each (v_initial.begin (), v_initial.end (), [&nodd] (const int& n) {
        nodd += n % 2;
    });
    neven = v_initial.size () - nodd;
    int npair (neven < nodd ? neven : nodd);
    npair *= 2;
    std::vector<int>::iterator iend (v_pozitie.begin () + npair), ieven (v_pozitie.begin ()), iodd (v_pozitie.begin () + 1);
    std::for_each (v_initial.begin (), v_initial.end (), [&iend, &ieven, &iodd, &npair] (const int& s) {
        if (npair) {
            switch (s % 2) {
            case 0 :
                *ieven++ = s;
                ++ieven;
                break;
            case 1 :
                *iodd++ = s;
                ++iodd;
                break;
            }
            --npair;
        }
        else *iend++ = s;
    });
}

int main (int argc, char* argv []) {
    const int N = 8;
    int tab [N] = {2, 6, 7, 8, 9, 3, 5, 1};
    std::vector<int> v_initial (tab, tab + N);
    std::cout << "\tv_initial == ";
    std::for_each (v_initial.begin (), v_initial.end (), [] (const int& s) {std::cout << s << " ";});
    std::cout << std::endl;
    std::vector<int> v_pozitie (v_initial.size (), -1);
    vector_pozitii_pare_impare (v_initial, v_pozitie);
    std::cout << "\tv_pozitie == ";
    std::for_each (v_pozitie.begin (), v_pozitie.end (), [] (const int& s) {std::cout << s << " ";});
    std::cout << std::endl;
}
I want to know if this backtracking algorithm actually works.
In the text book Foundations of Algorithms, 5th edition, it is defined as follows:
Algorithm 5.4: The Backtracking Algorithm for the Sum-of-Subsets Problem
Problem: Given n positive integers (weights) and a positive integer W,
determine all combinations of the integers that sum up to W.
Inputs: positive integer n, sorted (nondecreasing order) array of
positive integers w indexed from 1 to n, and a positive integer W.
Outputs: all combinations of the integers that sum to W.
void sum_of_subsets(index i,
                    int weight, int total) {
    if (promising(i))
        if (weight == W)
            cout << include[1] through include[i];
        else {
            include[i + 1] = "yes";  // Include w[i + 1].
            sum_of_subsets(i + 1, weight + w[i + 1], total - w[i + 1]);
            include[i + 1] = "no";   // Do not include w[i + 1].
            sum_of_subsets(i + 1, weight, total - w[i + 1]);
        }
}

bool promising(index i) {
    return (weight + total >= W) && (weight == W || weight + w[i + 1] <= W);
}
Following our usual convention, n, w, W, and include are not
inputs to our routines. If these variables were defined globally, the
top-level call to sum_of_subsets would be as follows:
sum_of_subsets(0, 0, total);
At the end of chapter 5, exercise 13 asks:
Use the Backtracking algorithm for the Sum-of-Subsets problem (Algorithm 5.4)
to find all combinations of the following numbers that sum to W = 52:
w1 = 2 w2 = 10 w3 = 13 w4 = 17 w5 = 22 w6 = 42
I've implemented this exact algorithm, accounting for arrays that start at 1 and it just does not work...
void sos(int i, int weight, int total) {
    int yes = 1;
    int no = 0;
    if (promising(i, weight, total)) {
        if (weight == W) {
            for (int j = 0; j < arraySize; j++) {
                std::cout << include[j] << " ";
            }
            std::cout << "\n";
        }
        else if (i < arraySize) {
            include[i+1] = yes;
            sos(i + 1, weight + w[i+1], total - w[i+1]);
            include[i+1] = no;
            sos(i + 1, weight, total - w[i+1]);
        }
    }
}

int promising(int i, int weight, int total) {
    return (weight + total >= W) && (weight == W || weight + w[i+1] <= W);
}
I believe the problem is here:
sos(i + 1, weight, total - w[i+1]);
sum_of_subsets(i+1, weight, total-w[i+1]);
When you reach this line you are not backtracking correctly.
Is anyone able to identify a problem with this algorithm or actually code it to work?
I personally find the algorithm problematic. There is no bounds checking, it uses a lot of globals, and it assumes an array is indexed from 1. I don't think you can copy it verbatim. It's pseudocode for the actual implementation. In C++ arrays always start from 0, so you're likely to have problems when you try to do include[i+1] while only checking i < arraySize.
The algorithm also assumes you have a global variable called total, which is used by the function promising.
I have reworked the code a bit, putting it inside a class, and simplified it somewhat:
#include <iostream>
#include <utility>
#include <vector>
using std::vector;

class Solution
{
private:
    vector<int> w;
    vector<int> include;
public:
    Solution(vector<int> weights) : w(std::move(weights)),
                                    include(w.size(), 0) {}

    void sos(int i, int weight, int total) {
        int yes = 1;
        int no = 0;
        int arraySize = include.size();
        if (weight == total) {
            for (int j = 0; j < arraySize; j++) {
                if (include[j]) {
                    std::cout << w[j] << " ";
                }
            }
            std::cout << "\n";
        }
        else if (i < arraySize)
        {
            include[i] = yes;
            // Include this weight
            sos(i + 1, weight + w[i], total);
            include[i] = no;
            // Exclude this weight
            sos(i + 1, weight, total);
        }
    }
};

int main()
{
    Solution solution({ 2, 10, 13, 17, 22, 42 });
    solution.sos(0, 0, 52);
    // prints: 10 42
    //         13 17 22
}
So yes, as others pointed out, you stumbled over the 1-based array index.
That aside, I think you should ask the author for a partial return of the money you paid for the book, because the logic of his code is overly complicated.
One good way not to run into bounds problems is to not use C++ (expecting hail of downvotes for this lol).
There are only 3 cases to test for:
The candidate value is greater than what is remaining. (busted)
The candidate value is exactly what is remaining.
The candidate value is less than what is remaining.
The promising function tries to express that, and then the result of that function is re-tested in the main function sos.
But it could look as simple as this:
search :: [Int] -> Int -> [Int] -> [[Int]]
search (x1:xs) t path
    | x1 > t  = []
    | x1 == t = [x1 : path]
    | x1 < t  = search xs (t-x1) (x1 : path) ++ search xs t path
search [] 0 path = [path]
search [] _ _ = []

items = [2, 10, 13, 17, 22, 42] :: [Int]
target = 52 :: Int

search items target []
-- [[42,10],[22,17,13]]
Now, it is by no means impossible to achieve a similar safety net while writing C++ code. But it takes determination and a conscious decision on what you are willing to cope with and what not. And you need to be willing to type a few more lines to accomplish what the 10 lines of Haskell do.
First off, I was bothered by all the complexity of indexing and range checking in the original C++ code. If we look at our Haskell code (which works with lists),
it is confirmed that we do not need random access at all. We only ever look at the start of the remaining items. And we append a value to the path (in Haskell we append to the front because speed) and eventually we append a found combination to the result set. With that in mind, bothering with indices is kind of over the top.
Secondly, I rather like the way the search function looks - showing the 3 crucial tests without any noise surrounding them. My C++ version should strive to be as pretty.
Also, global variables are so 1980 - we won't have that. And tucking those "globals" into a class to hide them a bit is so 1995. We won't have that either.
And here it is! The "safer" C++ implementation. And prettier... um... well some of you might disagree ;)
#include <cstdint>
#include <vector>
#include <iostream>

using Items_t = std::vector<int32_t>;
using Result_t = std::vector<Items_t>;

// The C++ way of saying: deriving(Show)
template <class T>
std::ostream& operator <<(std::ostream& os, const std::vector<T>& value)
{
    bool first = true;
    os << "[";
    for( const auto item : value)
    {
        if(first)
        {
            os << item;
            first = false;
        }
        else
        {
            os << "," << item;
        }
    }
    os << "]";
    return os;
}

// So we can do easy switch statement instead of chain of ifs.
enum class Comp : int8_t
{ LT = -1
, EQ = 0
, GT = 1
};

static inline
auto compI32( int32_t left, int32_t right ) -> Comp
{
    if(left == right) return Comp::EQ;
    if(left < right) return Comp::LT;
    return Comp::GT;
}

// So we can avoid index insanity and out of bounds problems.
template <class T>
struct VecRange
{
    using Iter_t = typename std::vector<T>::const_iterator;
    Iter_t current;
    Iter_t end;

    VecRange(const std::vector<T>& v)
        : current{v.cbegin()}
        , end{v.cend()}
    {}

    VecRange(Iter_t cur, Iter_t fin)
        : current{cur}
        , end{fin}
    {}

    static bool exhausted (const VecRange<T>&);
    static VecRange<T> next(const VecRange<T>&);
};

template <class T>
bool VecRange<T>::exhausted(const VecRange<T>& range)
{
    return range.current == range.end;
}

template <class T>
VecRange<T> VecRange<T>::next(const VecRange<T>& range)
{
    if(range.current != range.end)
        return VecRange<T>( range.current + 1, range.end );
    return range;
}

using ItemsRange = VecRange<Items_t::value_type>;

static void search( const ItemsRange items, int32_t target, Items_t path, Result_t& result)
{
    if(ItemsRange::exhausted(items))
    {
        if(0 == target)
        {
            result.push_back(path);
        }
        return;
    }
    switch(compI32(*items.current, target))
    {
        case Comp::GT:
            return;
        case Comp::EQ:
        {
            path.push_back(*items.current);
            result.push_back(path);
        }
        return;
        case Comp::LT:
        {
            auto path1 = path; // hope this makes a real copy...
            path1.push_back(*items.current);
            search(ItemsRange::next(items), target - *items.current, path1, result);
            search(ItemsRange::next(items), target, path, result);
        }
        return;
    }
}

int main(int argc, const char* argv[])
{
    Items_t items{ 2, 10, 13, 17, 22, 42 };
    Result_t result;
    int32_t target = 52;

    std::cout << "Input: " << items << std::endl;
    std::cout << "Target: " << target << std::endl;

    search(ItemsRange{items}, target, Items_t{}, result);

    std::cout << "Output: " << result << std::endl;
    return 0;
}
The code implements the algorithm correctly, except that you did not apply the one-based array logic in your output loop. Change:
for (int j = 0; j < arraySize; j++) {
    std::cout << include[j] << " ";
}
to:
for (int j = 0; j < arraySize; j++) {
    std::cout << include[j+1] << " ";
}
Depending on how you organised your code, make sure that promising is defined when sos is defined.
See it run on repl.it. Output:
0 1 0 0 0 1
0 0 1 1 1 0
The algorithm works fine: the second and third argument to the sos function act as a window in which the running sum should stay, and the promising function verifies against this window. Any value outside this window will be either too small (even if all remaining values were added to it, it would still be less than the target value) or too great (already overrunning the target). These two constraints are explained in the beginning of chapter 5.4 in the book.
At each index there are two possible choices: either include the value in the sum, or don't. The value at includes[i+1] represents this choice, and both are attempted. When there is a match deep down such recursing attempt, all these choices (0 or 1) will be output. Otherwise they are just ignored and switched to the opposite choice in a second attempt.
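For reference, here is a minimal self-contained version with that fix applied. This is my own sketch, not the original poster's full program: the global setup, the extra i < arraySize guard in promising (added so the sketch never reads past the end of w), and the computation of total in main are assumptions, since the question does not show that part.

#include <iostream>

// Minimal sketch of the corrected program. w and include are indexed
// from 1, as in the book, so index 0 is unused.
const int arraySize = 6;
const int W = 52;
int w[arraySize + 1] = { 0, 2, 10, 13, 17, 22, 42 };
int include[arraySize + 1] = { 0 };

int promising(int i, int weight, int total) {
    // The i < arraySize guard is an addition so we never read w[arraySize + 1].
    return (weight + total >= W) &&
           (weight == W || (i < arraySize && weight + w[i + 1] <= W));
}

void sos(int i, int weight, int total) {
    if (promising(i, weight, total)) {
        if (weight == W) {
            for (int j = 0; j < arraySize; j++) {
                std::cout << include[j + 1] << " ";   // one-based, as in the fix above
            }
            std::cout << "\n";
        }
        else if (i < arraySize) {
            include[i + 1] = 1;
            sos(i + 1, weight + w[i + 1], total - w[i + 1]);
            include[i + 1] = 0;
            sos(i + 1, weight, total - w[i + 1]);
        }
    }
}

int main() {
    int total = 0;
    for (int j = 1; j <= arraySize; j++) total += w[j];
    sos(0, 0, total);   // prints 0 1 0 0 0 1 and 0 0 1 1 1 0
}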
I would like to build a C++ program that shows all the possible combinations of N elements taken K at a time.
Let's suppose a vector vec[6] with elements 1 2 3 4 5 6 on it.
Using the combination formula, 6! / (4! (6 - 4)!) = 15 possibilities.
I want to generate a function which gives the result of all 15 possibilities taken 4 by 4 with no repetition, as example:
1 2 3 4
1 2 3 5
1 2 3 6
2 3 4 5
and so on...
I am using this code for now, but I want to use the numbers from my vector (v[6]).
#include <algorithm>
#include <iostream>
#include <string>

void comb(int N, int K)
{
    std::string bitmask(K, 1);  // K leading 1's
    bitmask.resize(N, 0);       // N-K trailing 0's

    // print integers and permute bitmask
    do {
        for (int i = 0; i < N; ++i)  // [0..N-1] integers
        {
            if (bitmask[i]) std::cout << " " << i;
        }
        std::cout << std::endl;
    } while (std::prev_permutation(bitmask.begin(), bitmask.end()));
}

int main()
{
    comb(6, 4);
}
Would you guys please give me some help? I'd like to know where I could change the code so that I can use my own vector.
I'm generating this vector v[i] and sorting it with a bubble sort, like this:
void order (int d[], int n){
    int i, j;
    for (i = 1; i < n; i++)
        for (j = 0; j < n-1; j++)
            if (d[j] > d[j+1])
                swap (d[j], d[j+1]);
    for (i = 0; i < n; i++)
        cout << d[i] << " ";
}
After that sorting, I want to put my vector into the comb function.
How could I make this possible?
Here's a C++14 solution that uses a free, open source library to do the work:
#include "combinations"
#include <iomanip>
#include <iostream>
#include <vector>
int
main()
{
std::vector<int> v{1, 2, 3, 4, 5, 6};
int state = 0;
for_each_combination(v.begin(), v.begin() + 4, v.end(),
[&state](auto first, auto last)
{
std::cout << std::setw(2) << ++state << " : ";
while (true)
{
std::cout << *first;
if (++first == last)
break;
std::cout << ' ';
}
std::cout << '\n';
return false;
});
}
This outputs:
1 : 1 2 3 4
2 : 1 2 3 5
3 : 1 2 3 6
4 : 1 2 4 5
5 : 1 2 4 6
6 : 1 2 5 6
7 : 1 3 4 5
8 : 1 3 4 6
9 : 1 3 5 6
10 : 1 4 5 6
11 : 2 3 4 5
12 : 2 3 4 6
13 : 2 3 5 6
14 : 2 4 5 6
15 : 3 4 5 6
The source code for the library can be copy/pasted from the above link and inspected for how it works. This library is extremely high performance compared to solutions using std::prev_permutation. The implementation is relatively simple for this function, but the library also contains more functionality that grows increasingly complicated to implement (but just as easy to use):
template <class BidirIter, class Function>
Function
for_each_combination(BidirIter first,
                     BidirIter mid,
                     BidirIter last,
                     Function f);

template <class BidirIter, class Function>
Function
for_each_permutation(BidirIter first,
                     BidirIter mid,
                     BidirIter last,
                     Function f);

template <class BidirIter, class Function>
Function
for_each_reversible_permutation(BidirIter first,
                                BidirIter mid,
                                BidirIter last,
                                Function f);

template <class BidirIter, class Function>
Function
for_each_circular_permutation(BidirIter first,
                              BidirIter mid,
                              BidirIter last,
                              Function f);

template <class BidirIter, class Function>
Function
for_each_reversible_circular_permutation(BidirIter first,
                                         BidirIter mid,
                                         BidirIter last,
                                         Function f);
The library has several pleasant features including:
Your input sequence (vector or whatever) need not be sorted.
You can prematurely break out of the loop at any time by returning true.
If you don't break out of the loop early, the sequence is always returned to its original state.
The functor always receives iterators to the first k elements of the sequence, so it is possible to also operate on the elements not selected if you tell the functor about the total length of the sequence.
Feel free to use the library as is, or study and take what you need from its implementation. The link above contains a tutorial-like description, and a detailed specification of each function.
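To illustrate the early-exit feature mentioned in the list above, here is a small sketch of my own (assuming the same "combinations" header as in the example): the functor returns true once it has visited five combinations, which stops the enumeration.

#include "combinations"
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3, 4, 5, 6};
    int seen = 0;
    for_each_combination(v.begin(), v.begin() + 4, v.end(),
                         [&seen](auto, auto)
                         {
                             ++seen;
                             return seen == 5;   // returning true stops the enumeration early
                         });
    std::cout << "visited " << seen << " combinations\n";   // prints: visited 5 combinations
}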
Start with the subset S = {1,2,3,...,k}; that's the first subset. Generate the next subset by examining elements from the right (start with the last), incrementing an element if you can (if it is < N), and returning that as the next subset. If you can't increment it, look at the element to the left until you find one you can increment. Increment it, and set the elements to its right sequentially from that point. Below are the 3-element subsets of {1,2,3,4,5} (N=5, k=3; there are 10 subsets):
{1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5}, {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5}
#include <iostream>
#include <vector>

std::ostream& operator<<(std::ostream& o, std::vector<int>& a)
{
    o << "{";
    for (std::vector<int>::const_iterator it = a.begin(); it != a.end(); ++it) {
        o << *it;
        if (it + 1 < a.end()) o << ",";
    }
    return o << "}";
}

int main()
{
    const int N = 7;
    const int k = 4;
    std::vector<int> A(k);

    // initialize
    for (int i = 0; i < k; ++i) {
        A[i] = i + 1;
    }
    std::cout << A << std::endl;

    int h = 0;
    bool done = false;
    do {
        ++A[k-h-1];
        for (int t = k - h; t < k; ++t) {
            A[t] = A[t-1] + 1;
        }
        if (A[k-h-1] < N - h) {
            // last element can be incremented, stay there...
            h = 0;
        } else {
            // last element at max, look back ...
            ++h;
        }
        done = (A[0] == N - k + 1);
        std::cout << A << std::endl;
    } while (!done);
}
Pretty simple recursive implementation:
struct Combs
{
    vector<int> scombs;

    template <typename F>
    void call_combs(int n, int k, F f)
    {
        if (k == 0) {
            f();
        }
        else {
            scombs.push_back(n - 1);
            call_combs(n - 1, k - 1, f);
            scombs.resize(scombs.size() - 1);
            if (k < n) {
                call_combs(n - 1, k, f);
            }
        }
    }
};
...
Combs combs;
const auto& coco = combs.scombs;
combs.call_combs(6, 4, [&coco](){
    copy(coco.cbegin(), coco.cend(), ostream_iterator<int>(cout));
    cout << endl;
});
Suppose I have a sorted vector of numbers from 0 to 1. I want to know the indices where values become larger than multiples of 0.1 (i.e. the deciles; in the future maybe also percentiles).
A simple solution I have in mind is using std::lower_bound:
std::vector<float> v;
// ... something which fills the vector here ...
std::sort(v.begin(), v.end());
std::vector<float>::iterator i = v.begin();
for (float k = 0.1; k < 0.99; k += 0.1) {
    i = std::lower_bound(v.begin(), v.end(), k);
    std::cout << "reached " << k << " at position " << (i - v.begin()) << std::endl;
    std::cout << " going from " << *(i - 1) << " to " << *i << std::endl;
    // for simplicity of the example, I don't check if i is the first item of the vector
}
Since the vector can be long, I was wondering if this can be made faster. A first optimisation is to not search the part of the vector below the previous decile:
i = std::lower_bound (i, v.end(), k);
But, assuming lower_bound performs a binary search, this still scans the entire upper part of the vector for each decile over and over again and doesn't use the intermediate results from the previous binary search.
So ideally I would like to use a search function to which I can pass multiple search items, something like:
float searchvalues[9];
for (int k = 1; k <= 9; ++k) {
    searchvalues[k - 1] = ((float)k) / 10.f;
}
int deciles[9] = FANCY_SEARCH(v.begin(), v.end(), searchvalues, 9);
is there anything like this already around and existing in standard, boost, or other libraries?
To be in O(log n), you may use the following:
#include <algorithm>
#include <array>
#include <cstddef>
#include <utility>
#include <vector>
#include <boost/optional.hpp>

void fill_range(
    std::array<boost::optional<std::pair<std::size_t, std::size_t>>, 10u>& ranges,
    const std::vector<float>& v,
    std::size_t b,
    std::size_t e)
{
    if (b == e) {
        return;
    }
    int decile_b = v[b] / 0.1f;
    int decile_e = v[e - 1] / 0.1f;

    if (decile_b == decile_e) {
        auto& range = ranges[decile_b];
        if (range) {
            range->first = std::min(range->first, b);
            range->second = std::max(range->second, e);
        } else {
            range = std::make_pair(b, e);
        }
    } else {
        std::size_t mid = (b + e + 1) / 2;
        fill_range(ranges, v, b, mid);
        fill_range(ranges, v, mid, e);
    }
}

std::array<boost::optional<std::pair<std::size_t, std::size_t>>, 10u>
decile_ranges(const std::vector<float>& v)
{
    // assume sorted `v` with value x: 0 <= x < 1
    std::array<boost::optional<std::pair<std::size_t, std::size_t>>, 10u> res;
    fill_range(res, v, 0, v.size());
    return res;
}
Live Demo
but a linear search seems simpler
auto last = v.begin();
for (int i = 0; i != 10; ++i) {
    const auto it = std::find_if(v.begin(), v.end(),
                                 [i](float f) { return f >= (i + 1) * 0.1f; });
    // ith decile ranges from `last` to `it`
    last = it;
}
There isn't anything in Boost or the C++ Standard Library. Two choices for an algorithm, bearing in mind that both vectors are sorted:
O(N): trundle through the sorted vector, considering the elements of your quantile vector as you go.
O(Log N * Log M): Start with the middle quantile. Call lower_bound. The result of this becomes the higher iterator in a subsequent lower_bound call on the set of quantiles below that pivot and the lower iterator in a subsequent lower_bound call on the set of quantiles above that pivot. Repeat the process for both halves.
For percentiles, my feeling is that (1) will be the faster choice, and is considerably simpler to implement.
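As a rough sketch of option (2) above (my own code, not a library facility): recursively pick the middle quantile, locate it with std::lower_bound, and reuse the resulting iterator to shrink the data range searched for the quantiles on either side of that pivot.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Sketch of option (2): recursive bisection over the quantiles, reusing each
// lower_bound result to narrow the data range for the remaining quantiles.
void multi_lower_bound(std::vector<float>::const_iterator lo,
                       std::vector<float>::const_iterator hi,
                       const std::vector<float>& quantiles,
                       std::size_t qlo, std::size_t qhi,
                       std::vector<std::vector<float>::const_iterator>& out)
{
    if (qlo >= qhi) return;
    const std::size_t qmid = (qlo + qhi) / 2;
    const auto it = std::lower_bound(lo, hi, quantiles[qmid]);
    out[qmid] = it;
    multi_lower_bound(lo, it, quantiles, qlo, qmid, out);      // quantiles below the pivot
    multi_lower_bound(it, hi, quantiles, qmid + 1, qhi, out);  // quantiles above the pivot
}

int main()
{
    std::vector<float> v;
    for (int i = 0; i < 1000; ++i) v.push_back(i / 1000.0f);   // already sorted demo data

    std::vector<float> quantiles;
    for (int k = 1; k <= 9; ++k) quantiles.push_back(k / 10.0f);

    std::vector<std::vector<float>::const_iterator> bounds(quantiles.size(), v.cbegin());
    multi_lower_bound(v.cbegin(), v.cend(), quantiles, 0, quantiles.size(), bounds);

    for (std::size_t q = 0; q < quantiles.size(); ++q)
        std::cout << "reached " << quantiles[q] << " at position "
                  << (bounds[q] - v.cbegin()) << '\n';
}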