how to solve possible unique combinations problem - c++

I need a combination algorithm for large numbers. I found something on Stack Overflow, but the implementation was not entirely correct: the code below produces wrong results when the size of the vector is larger than about 22-24 and k is high.
#include <bits/stdc++.h>
using namespace std;

template<typename T>
void pretty_print(const T& c, int combo)
{
    int n = c.size();
    for (int i = 0; i < n; ++i) {
        if ((combo >> i) & 1)
            cout << c[i] << ' ';
    }
    cout << endl;
}

template<typename T>
void combo(const T& c, int k)
{
    int n = c.size();
    int combo = (1 << k) - 1; // k bit sets
    while (combo < 1 << n) {
        pretty_print(c, combo);
        int x = combo & -combo;
        int y = combo + x;
        int z = (combo & ~y);
        combo = z / x;
        combo >>= 1;
        combo |= y;
    }
}

int main()
{
    vector<char> c0 = {'1', '2', '3', '4', '5'};
    combo(c0, 3);

    vector<char> c1 = {'a', 'b', 'c', 'd', 'e', 'f', 'g'};
    combo(c1, 4);
    return 0;
}
This is taken from Creating all possible k combinations of n items in C++
Now I'm using std::prev_permutation; it works, but it is too slow for my analysis program, which evaluates more than a thousand combinations. So I wanted to use the algorithm above. How can I fix this algorithm to work under all circumstances?
Thank you in advance.

The reason it fails is that the original algorithm relies on bitwise arithmetic: it treats each bit of an int as a separate item, so depending on how your compiler defines int, you are limited to a maximum n of 32 or 64. You could change all the int declarations to int64_t to force 64-bit arithmetic.
However, that would still cap your maximum n at 64. (In fact, it is capped at the bit size minus 2, i.e. 62 or 30, so I'm not entirely sure why you got stuck around 23.) The real solution is to change all the ints to std::bitset, which can store up to SIZE_MAX bits.
Code as below:
#include <iostream>
#include <vector>
#include <bitset>

template<size_t bitSize>
void pretty_print(const std::vector<std::bitset<bitSize>>& c, std::bitset<bitSize> combo)
{
    int n = c.size();
    for (int i = 0; i < n; ++i) {
        if (((combo >> i) & std::bitset<bitSize>(1)) != std::bitset<bitSize>(0))
            std::cout << c[i].to_ullong() << ' ';
    }
    std::cout << std::endl;
}
template<size_t bitSize>
bool smallerThan(std::bitset<bitSize> bits1, std::bitset<bitSize> bits2)
{
    // Compare from the most significant bit down:
    // the first differing bit decides the result.
    for (size_t i = bitSize; i-- > 0; )
    {
        if (bits1[i] != bits2[i])
        {
            return bits2[i];
        }
    }
    return false;
}
template<size_t bitSize>
std::bitset<bitSize> bitAddition(std::bitset<bitSize> bits1, std::bitset<bitSize> bits2)
{
    std::bitset<bitSize> carry;
    while (bits2 != std::bitset<bitSize>(0))
    {
        carry = bits1 & bits2;
        bits1 = bits1 ^ bits2;
        bits2 = carry << 1;
    }
    return bits1;
}

template<size_t bitSize>
std::bitset<bitSize> bitSubtraction(std::bitset<bitSize> bits1, std::bitset<bitSize> bits2)
{
    while (bits2 != std::bitset<bitSize>(0))
    {
        std::bitset<bitSize> borrow = (~bits1) & bits2;
        bits1 = bits1 ^ bits2;
        bits2 = borrow << 1;
    }
    return bits1;
}

template<size_t bitSize>
std::bitset<bitSize> bitSubtractionStopAt0(std::bitset<bitSize> bits1, std::bitset<bitSize> bits2)
{
    while (bits2 != std::bitset<bitSize>(0))
    {
        std::bitset<bitSize> borrow = (~bits1) & bits2;
        bits1 = bits1 ^ bits2;
        bits2 = borrow << 1;
        if (bits1 == std::bitset<bitSize>(0)) return bits1;
    }
    return bits1;
}

template<size_t bitSize>
std::bitset<bitSize> bitDivision(std::bitset<bitSize> dividend, std::bitset<bitSize> divisor)
{
    std::bitset<bitSize> quotient(0);
    while (smallerThan(std::bitset<bitSize>(0), dividend))
    {
        dividend = bitSubtractionStopAt0(dividend, divisor);
        quotient = bitAddition(quotient, std::bitset<bitSize>(1));
    }
    return quotient;
}
template<size_t bitSize>
void combo(const std::vector<std::bitset<bitSize>>& c, int k)
{
    auto n = c.size();
    std::bitset<bitSize> one(1);
    std::bitset<bitSize> combo(bitSubtraction((one << k), std::bitset<bitSize>(1)));
    while (smallerThan(combo, (one << n)))
    {
        pretty_print(c, combo);
        auto negCombo = combo;
        negCombo.flip();
        for (size_t i = 0; i < bitSize; i++)
        {
            negCombo.flip(i);
            if (negCombo[i])
            {
                break;
            }
        }
        std::bitset<bitSize> x = combo & negCombo;
        std::bitset<bitSize> y;
        bool tempBit = 0;
        for (size_t i = 0; i < bitSize; i++)
        {
            y[i] = combo[i] ^ x[i];
            if (tempBit)
            {
                if (!y[i])
                {
                    tempBit = 0;
                }
                y[i] = y[i] ^ 1;
            }
            if (combo[i] & x[i])
            {
                tempBit = 1;
            }
        }
        std::bitset<bitSize> z = (combo & ~y);
        combo = bitDivision(z, x);
        combo >>= 1;
        combo |= y;
    }
}
int main()
{
    const int n = 500;
    int k = 2;
    std::bitset<(n + 2)> n_bits(n);
    std::vector<std::bitset<n + 2>> people;
    for (unsigned long i = 1; i < n_bits.to_ullong() + 1; ++i) { people.push_back(i); }
    combo(people, k);
    return 0;
}
I've only tested 90C4 and 500C2, but this should work for all n smaller than SIZE_MAX. I also believe there are ways to optimize the bitwise calculations I've used; I'm not an expert on them.
Another approach would be to use a larger number type, such as int128_t or int1024_t. However, you would also need to overload some of the bitwise operations.
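Before reaching for multiprecision, note that if n ≤ 63 is enough, plain unsigned 64-bit arithmetic already works and needs no overloads: Gosper's hack runs unchanged on a uint64_t (a sketch; combos64 is a name invented here):

```cpp
#include <cstdint>
#include <vector>

// Gosper's hack on an unsigned 64-bit word: returns every k-subset of
// {0, ..., n-1} as a bitmask; valid for n up to 63, because unsigned
// wraparound is well defined (unlike shifting a signed int past its top bit).
std::vector<uint64_t> combos64(int n, int k)
{
    std::vector<uint64_t> out;
    if (k <= 0 || k > n || n > 63) return out;
    uint64_t combo = (uint64_t(1) << k) - 1;   // lowest k bits set
    const uint64_t limit = uint64_t(1) << n;
    while (combo < limit) {
        out.push_back(combo);
        uint64_t x = combo & (~combo + 1);     // lowest set bit
        uint64_t y = combo + x;                // carry ripples through the low run
        combo = (((combo & ~y) / x) >> 1) | y; // next bitmask with k bits set
    }
    return out;
}
```

Each returned mask has exactly k bits set; combos64(5, 3) yields the same ten subsets the original code prints for c0.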
Also you mentioned that:
For example there is a vector of {30,30,30,30,30,30,30,30,60,60,60,60,60,60,60,60,90,90,90,90,90,90,90,90}, when you try to find combination of this vector by 8, it does not do it correctly.
The way this algorithm works, it does not check the value of each member. Instead it automatically assumes all members are unique, similar to what a real nCr function would do.
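If you do want each distinct multiset only once when values repeat, that has to be handled explicitly; one common sketch (not part of the algorithm above; distinctCombos is a name invented here) sorts the input and skips equal values at the same recursion depth:

```cpp
#include <vector>

// Enumerates the *distinct* k-combinations of a multiset (values may repeat).
// The input must be sorted; at each depth, equal values after the first are
// skipped so the same multiset is never emitted twice.
void distinctCombos(const std::vector<int>& v, int k, int start,
                    std::vector<int>& cur, std::vector<std::vector<int>>& out)
{
    if (k == 0) { out.push_back(cur); return; }
    for (int i = start; i < (int)v.size(); ++i) {
        if (i > start && v[i] == v[i - 1]) continue; // skip duplicate branch
        cur.push_back(v[i]);
        distinctCombos(v, k - 1, i + 1, cur, out);
        cur.pop_back();
    }
}
```

For the 24-element example above, calling this with k = 8 on the sorted vector yields each distinct multiset of values exactly once.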
Also, some of the other methods mentioned in Creating all possible k combinations of n items in C++ are quite fast as well, especially since I haven't found a good way of implementing the bitwise calculations yet. I guess you could use some data structure to split the number into several bitset<63> objects, so that you could cast them to unsigned long long and use the built-in bitwise operators; I'm not sure whether that would run faster, though.
Either way, if you are doing nCk ≈ thousands, the difference between all those methods should be negligible.


The fastest way to generate a random permutation

I need to permute N numbers between 0 and N-1 in the fastest way (on a CPU, without multi-threading, but maybe with SIMD). N is not large, I think in most cases, N<=12, so N! fits a signed 32-bit integer.
What I have tried so far is roughly the following (some optimizations are omitted, and my original code is in Java, but we speak performance in C++ if not pseudo-code):
#include <random>
#include <cstdint>
#include <iostream>

static inline uint64_t rotl(const uint64_t x, int k) {
    return (x << k) | (x >> (64 - k));
}

static uint64_t s[2];

uint64_t Next(void) {
    const uint64_t s0 = s[0];
    uint64_t s1 = s[1];
    const uint64_t result = rotl(s0 + s1, 17) + s0;
    s1 ^= s0;
    s[0] = rotl(s0, 49) ^ s1 ^ (s1 << 21); // a, b
    s[1] = rotl(s1, 28); // c
    return result;
}

// Assume the array |dest| must have enough space for N items
void GenPerm(int* dest, const int N) {
    for (int i = 0; i < N; i++) {
        dest[i] = i;
    }
    uint64_t random = Next();
    for (int i = 0; i + 1 < N; i++) {
        const int ring = (N - i);
        // I hope the compiler optimizes acquisition
        // of the quotient and modulo for the same
        // dividend and divisor pair into a single
        // CPU instruction, at least in Java it does
        const int pos = random % ring + i;
        random /= ring;
        const int t = dest[pos];
        dest[pos] = dest[i];
        dest[i] = t;
    }
}

int main() {
    std::random_device rd;
    uint32_t* seed = reinterpret_cast<uint32_t*>(s);
    for (int i = 0; i < 4; i++) {
        seed[i] = rd();
    }
    int dest[20];
    for (int i = 0; i < 10; i++) {
        GenPerm(dest, 12);
        for (int j = 0; j < 12; j++) {
            std::cout << dest[j] << ' ';
        }
        std::cout << std::endl;
    }
    return 0;
}
The above is slow because the CPU's modulo operation (%) is slow. I could think of generating one random number between 0 and N!-1 (inclusive); this would reduce the number of modulo operations and Next() calls, but I don't know how to proceed from there. Another approach could be to replace the division operation with multiplication by the inverse integer, at the cost of a small bias in the moduli generated, but I don't know these inverse integers, and the multiplication will probably not be much faster (bitwise operations & shifts should be faster).
Any more concrete ideas?
UPDATE: I've been asked why this is a bottleneck in the real application, so I've posted a simplified task that may be of interest to other folks. The real task in production is:
struct Item {
    uint8_t is_free_; // 0 or 1
    // ... other members ...
};

Item* PickItem(const int time) {
    // hash-map lookup, non-empty arrays
    std::vector<std::vector<Item*>>& arrays = GetArrays(time);
    Item* busy = nullptr;
    for (int i = 0; i < arrays.size(); i++) {
        uint64_t random = Next();
        for (int j = 0; j + 1 < arrays[i].size(); j++) {
            const int ring = (arrays[i].size() - j);
            const int pos = random % ring + j;
            random /= ring;
            Item* cur = arrays[i][pos];
            if (cur->is_free_) {
                // Return a random free item from the first array
                // where there is at least one free item
                return cur;
            }
            arrays[i][pos] = arrays[i][j];
            arrays[i][j] = cur;
        }
        Item* cur = arrays[i][arrays[i].size() - 1];
        if (cur->is_free_) {
            return cur;
        } else {
            // Remember the busy item in the last array in case no free
            // items are found
            busy = cur;
        }
    }
    return busy;
}
I came up with the following solution in C++ (though not very portable to Java, because Java doesn't allow parametrizing generics with a constant - in Java I had to use polymorphism, as well as a lot of code duplication):
#include <random>
#include <cstdint>
#include <iostream>

static inline uint64_t rotl(const uint64_t x, int k) {
    return (x << k) | (x >> (64 - k));
}

static uint64_t s[2];

uint64_t Next(void) {
    const uint64_t s0 = s[0];
    uint64_t s1 = s[1];
    const uint64_t result = rotl(s0 + s1, 17) + s0;
    s1 ^= s0;
    s[0] = rotl(s0, 49) ^ s1 ^ (s1 << 21); // a, b
    s[1] = rotl(s1, 28); // c
    return result;
}

template<int N> void GenPermInner(int* dest, const uint64_t random) {
    // Because N is a constant, the compiler can optimize the division
    // by N with more lightweight operations like shifts and additions
    const int pos = random % N;
    const int t = dest[pos];
    dest[pos] = dest[0];
    dest[0] = t;
    return GenPermInner<N-1>(dest + 1, random / N);
}

template<> void GenPermInner<0>(int*, const uint64_t) {
    return;
}

template<> void GenPermInner<1>(int*, const uint64_t) {
    return;
}

// Assume the array |dest| must have enough space for N items
void GenPerm(int* dest, const int N) {
    switch (N) {
    case 0:
    case 1:
        return;
    case 2:
        return GenPermInner<2>(dest, Next());
    case 3:
        return GenPermInner<3>(dest, Next());
    case 4:
        return GenPermInner<4>(dest, Next());
    case 5:
        return GenPermInner<5>(dest, Next());
    case 6:
        return GenPermInner<6>(dest, Next());
    case 7:
        return GenPermInner<7>(dest, Next());
    case 8:
        return GenPermInner<8>(dest, Next());
    case 9:
        return GenPermInner<9>(dest, Next());
    case 10:
        return GenPermInner<10>(dest, Next());
    case 11:
        return GenPermInner<11>(dest, Next());
    case 12:
        return GenPermInner<12>(dest, Next());
    // You can continue with larger numbers, so long as (N!-1) fits 64 bits
    default: {
        const uint64_t random = Next();
        const int pos = random % N;
        const int t = dest[pos];
        dest[pos] = dest[0];
        dest[0] = t;
        return GenPerm(dest + 1, N - 1);
    }
    }
}

int main() {
    std::random_device rd;
    uint32_t* seed = reinterpret_cast<uint32_t*>(s);
    for (int i = 0; i < 4; i++) {
        seed[i] = rd();
    }
    int dest[20];
    const int N = 12;
    // No need to init again and again
    for (int j = 0; j < N; j++) {
        dest[j] = j;
    }
    for (int i = 0; i < 10; i++) {
        GenPerm(dest, N);
        // Or, if you know N at compile-time, call directly
        // GenPermInner<N>(dest, Next());
        for (int j = 0; j < N; j++) {
            std::cout << dest[j] << ' ';
        }
        std::cout << std::endl;
    }
    return 0;
}

compact form of many for loop in C++

I have a piece of code as follows, and the number of for loops is determined by n which is known at compile time. Each for loop iterates over the values 0 and 1. Currently, my code looks something like this
for (int in = 0; in < 2; in++) {
    for (int in_1 = 0; in_1 < 2; in_1++) {
        for (int in_2 = 0; in_2 < 2; in_2++) {
            // ... n times
            for (int i2 = 0; i2 < 2; i2++) {
                for (int i1 = 0; i1 < 2; i1++) {
                    d[in][in_1][in_2]...[i2][i1] = updown(in) + updown(in_1) + ... + updown(i1);
                }
            }
            // ...
        }
    }
}
Now my question is whether one can write it in a more compact form.
The n bits in_k can be interpreted as the representation of one integer less than 2^n.
This allows you to work with a simple 1-D array (vector) d[.].
In practice, an integer j corresponds to
j = in[0] + 2*in[1] + ... + 2^(n-1)*in[n-1]
Moreover, a direct implementation is O(N log N) (with N = 2^n).
A recursive solution is possible, for example using
f(val, n) = updown(val%2) + f(val/2, n-1) and f(val, 0) = 0.
This would correspond to O(N) complexity, provided memoization is introduced, which is not implemented here.
Result:
0 : 0
1 : 1
2 : 1
3 : 2
4 : 1
5 : 2
6 : 2
7 : 3
8 : 1
9 : 2
10 : 2
11 : 3
12 : 2
13 : 3
14 : 3
15 : 4
#include <iostream>
#include <vector>

int up_down (int b) {
    if (b) return 1;
    return 0;
}

int f(int val, int n) {
    if (n < 0) return 0;
    return up_down (val%2) + f(val/2, n-1);
}

int main() {
    const int n = 4;
    int size = 1;
    for (int i = 0; i < n; ++i) size *= 2;
    std::vector<int> d(size, 0);
    for (int i = 0; i < size; ++i) {
        d[i] = f(i, n);
    }
    for (int i = 0; i < size; ++i) {
        std::cout << i << " : " << d[i] << '\n';
    }
    return 0;
}
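Since this particular up_down maps 0 to 0 and 1 to 1, each entry d[i] is simply the number of set bits in i, so the table can also be filled with a standard popcount (a sketch; build_table is a name invented here, and C++20's std::popcount could replace the bitset trick):

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// d[i] = number of 1 bits in i, for i in [0, 2^n): identical to the
// recursive f(i, n) above when up_down is the identity on {0, 1}.
std::vector<int> build_table(int n)
{
    std::vector<int> d(std::size_t(1) << n);
    for (std::size_t i = 0; i < d.size(); ++i)
        d[i] = static_cast<int>(std::bitset<64>(i).count()); // C++20: std::popcount(i)
    return d;
}
```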
As mentioned above, the recursive approach allows O(N) complexity, provided memoization is implemented.
Another possibility is to use a simple iterative approach in order to get this O(N) complexity
(here N represents the total number of data points).
#include <iostream>
#include <vector>

int up_down (int b) {
    if (b) return 1;
    return 0;
}

int main() {
    const int n = 4;
    int size = 1;
    for (int i = 0; i < n; ++i) size *= 2;
    std::vector<int> d(size, 0);
    int size_block = 1;
    for (int i = 0; i < n; ++i) {
        for (int j = size_block - 1; j >= 0; --j) {
            d[2*j+1] = d[j] + up_down(1);
            d[2*j]   = d[j] + up_down(0);
        }
        size_block *= 2;
    }
    for (int i = 0; i < size; ++i) {
        std::cout << i << " : " << d[i] << '\n';
    }
    return 0;
}
You can refactor your code slightly like this:
for (int in = 0; in < 2; in++) {
    auto& dn = d[in];
    auto updown_n = updown(in);
    for (int in_1 = 0; in_1 < 2; in_1++) {
        // dn_1 == d[in][in_1]
        auto& dn_1 = dn[in_1];
        // updown_n_1 == updown(in)+updown(in_1)
        auto updown_n_1 = updown_n + updown(in_1);
        for (int in_2 = 0; in_2 < 2; in_2++) {
            // dn_2 == d[in][in_1][in_2]
            auto& dn_2 = dn_1[in_2];
            // updown_n_2 == updown(in)+updown(in_1)+updown(in_2)
            auto updown_n_2 = updown_n_1 + updown(in_2);
            .
            .
            .
            for (int i2 = 0; i2 < 2; i2++) {
                // d2 == d[in][in_1][in_2]...[i2]
                auto& d2 = d3[i2];
                // updown_2 == updown(in)+updown(in_1)+updown(in_2)+...+updown(i2)
                auto updown_2 = updown_3 + updown(i2);
                for (int i1 = 0; i1 < 2; i1++) {
                    // d1 == d[in][in_1][in_2]...[i2][i1]
                    auto& d1 = d2[i1];
                    // updown_1 == updown(in)+updown(in_1)+...+updown(i2)+updown(i1)
                    auto updown_1 = updown_2 + updown(i1);
                    // d[in][in_1][in_2]...[i2][i1] = updown(in)+updown(in_1)+...+updown(i1);
                    d1 = updown_1;
                }
            }
        }
    }
}
And make this into a recursive function now:
template<std::size_t N, typename T>
void loop(T& d) {
    for (int i = 0; i < 2; ++i) {
        loop<N-1>(d[i], updown(i));
    }
}

template<std::size_t N, typename T, typename U>
typename std::enable_if<N != 0>::type loop(T& d, U updown_result) {
    for (int i = 0; i < 2; ++i) {
        loop<N-1>(d[i], updown_result + updown(i));
    }
}

template<std::size_t N, typename T, typename U>
typename std::enable_if<N == 0>::type loop(T& d, U updown_result) {
    d = updown_result;
}
If your type is int d[2][2][2]...[2][2]; or int*****... d;, you can also stop when the type isn't an array or pointer instead of manually specifying N (or adapt the check to whatever the type of d[0][0][0]...[0][0] is).
Here's a version that does that with a recursive lambda:
auto loop = [](auto& self, auto& d, auto updown_result) -> void {
    using d_t = typename std::remove_cv<typename std::remove_reference<decltype(d)>::type>::type;
    if constexpr (!std::is_array<d_t>::value && !std::is_pointer<d_t>::value) {
        // Last level of nesting
        d = updown_result;
    } else {
        for (int i = 0; i < 2; ++i) {
            self(self, d[i], updown_result + updown(i));
        }
    }
};

for (int i = 0; i < 2; ++i) {
    loop(loop, d[i], updown(i));
}
I am assuming that it is a multi-dimensional matrix. You may have to solve it mathematically first and then write the respective equations in the program.

How to select all possible combination of elements from a set using recursion

This is a question from hackerrank; I am trying to understand how recursion works.
The task at hand is:
Find the number of ways that a given integer, X, can be expressed
as the sum of the Nth power of unique, natural numbers.
So for example, if X = 100 and N = 2
100 = 10² = 6² + 8² = 1² + 3² + 4² + 5² + 7²
so 100 can be expressed as the square of unique natural numbers in 3
different ways, so our output is 3.
Here is my code:
#include <cmath>
#include <iostream>
using namespace std;

int numOfSums(int x, int& n, const int k) {
    int count = 0, j;
    for (int i = (k + 1); (j = (int) pow(i, n)) <= x; i++) {
        j = x - j;
        if (j == 0)
            count++;
        else
            count += numOfSums(j, n, i);
    }
    return count;
}

int main() {
    int x, n;
    cin >> x >> n;
    cout << numOfSums(x, n, 0) << endl;
    return 0;
}
But when I input x = 100 and n = 2, it's outputting 2, not 3. What's wrong with the code?
Link to the question: https://www.hackerrank.com/challenges/the-power-sum
Your example code returns 3 when I run it using this main():
#include <iostream>

int main() {
    int x = 100, n = 2;
    std::cout << numOfSums(x, n, 0) << std::endl;
    return 0;
}
The problem is likely that you're using double std::pow(double, int) but not rounding the result to the nearest integer (an (int) cast truncates toward zero). You should add ½ before truncating:
j = static_cast<int>(pow(i, n) + 0.5)
I've used the more-C++ style of cast, which I find clearer.
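Equivalently, std::llround from &lt;cmath&gt; does the round-to-nearest step in one call (a small sketch; ipow_rounded is a name invented here):

```cpp
#include <cmath>

// Round pow's floating-point result to the nearest integer instead of
// truncating, so a result like 99.9999... still becomes 100.
long long ipow_rounded(int base, int exp)
{
    return std::llround(std::pow(static_cast<double>(base), exp));
}
```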
It would be more efficient to implement your own equivalent of std::pow() that operates on integers. That can be recursive, too, if you want:
unsigned long pow(unsigned long x, unsigned long n)
{
    return n ? x * pow(x, n-1) : 1;
}
An iterative version is more efficient (or a tail-recursive version and suitable optimizing compiler).
Reduced version, with my changes:
template<typename T>
T powi(T x, T n)
{
    T r{1};
    for (; n; n /= 2) {
        r *= n % 2 ? x : 1;
        x *= x;
    }
    return r;
}

template<typename T>
T numOfSums(T x, T n, T i = {})
{
    T count{}, j;
    for (++i; (j = powi(i, n)) <= x; ++i)
        count += j == x ? 1 : numOfSums(x-j, n, i);
    return count;
}

#include <iostream>

int main()
{
    unsigned long int x = 100, n = 2;
    std::cout << numOfSums(x, n) << std::endl;
    return 0;
}

Algorithm that builds heap

I am trying to implement the build_max_heap function that creates the heap (as it is written in Cormen's "Introduction to Algorithms"). But I am getting strange output and I could not localize the error. My program successfully fills the table with random numbers and shows them, but after build_max_heap() I get strange numbers, probably because somewhere my program reached something outside the table, but I cannot find this error. I will be glad for any help.
For example I get the table:
0 13 18 0 22 15 24 19 5 23
And my output is:
24 7 5844920 5 22 15 18 19 0 23
My code:
#include <iostream>
#include <ctime>
#include <stdlib.h>

const int n = 12; // the length of my table, I will only use indexes 1...n-1

struct heap
{
    int *tab;
    int heap_size;
};

void complete_with_random(heap &heap2)
{
    srand(time(NULL));
    for (int i = 1; i <= heap2.heap_size; i++)
    {
        heap2.tab[i] = rand() % 25;
    }
    heap2.tab[0] = 0;
}

void show(heap &heap2)
{
    for (int i = 1; i < heap2.heap_size; i++)
    {
        std::cout << heap2.tab[i] << " ";
    }
}

int parent(int i)
{
    return i / 2;
}

int left(int i)
{
    return 2 * i;
}

int right(int i)
{
    return 2 * i + 1;
}

void max_heapify(heap &heap2, int i)
{
    if (i >= heap2.heap_size || i == 0)
    {
        return;
    }
    int l = left(i);
    int r = right(i);
    int largest;
    if (l <= heap2.heap_size || heap2.tab[l] > heap2.tab[i])
    {
        largest = l;
    }
    else
    {
        largest = i;
    }
    if (r <= heap2.heap_size || heap2.tab[r] > heap2.tab[i])
    {
        largest = r;
    }
    if (largest != i)
    {
        std::swap(heap2.tab[i], heap2.tab[largest]);
        max_heapify(heap2, largest);
    }
}

void build_max_heap(heap &heap2)
{
    for (int i = heap2.heap_size / 2; i >= 1; i--)
    {
        max_heapify(heap2, i);
    }
}

int main()
{
    heap heap1;
    heap1.tab = new int[n];
    heap1.heap_size = n - 1;
    complete_with_random(heap1);
    show(heap1);
    std::cout << std::endl;
    build_max_heap(heap1);
    show(heap1);
}
Indeed, the table is accessed with out-of-bounds indexes.
if (l <= heap2.heap_size || heap2.tab[l] > heap2.tab[i])
^^
I think you meant && in this condition.
The same for the next branch with r.
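With && in both conditions, and comparing each child against the current largest rather than against i, the fixed function would look like this (a sketch using the question's 1-based layout, where tab[1..heap_size] are the valid elements):

```cpp
#include <algorithm>

struct heap
{
    int *tab;      // 1-based: valid elements are tab[1..heap_size]
    int heap_size;
};

void max_heapify(heap &heap2, int i)
{
    int l = 2 * i;
    int r = 2 * i + 1;
    int largest = i;
    // && ties the bounds check to the value check, so tab[l] and tab[r]
    // are only read when the child actually exists
    if (l <= heap2.heap_size && heap2.tab[l] > heap2.tab[largest])
        largest = l;
    if (r <= heap2.heap_size && heap2.tab[r] > heap2.tab[largest])
        largest = r;
    if (largest != i)
    {
        std::swap(heap2.tab[i], heap2.tab[largest]);
        max_heapify(heap2, largest);
    }
}

void build_max_heap(heap &heap2)
{
    for (int i = heap2.heap_size / 2; i >= 1; i--)
        max_heapify(heap2, i);
}
```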
In case you're still having problems, below is my own implementation that you might use for reference. It is also based on the Cormen et al. book, so it uses more or less the same terminology. It allows arbitrary types for the actual container, the comparison, and the swap functions. It provides a public queue-like interface, including key incrementing.
Because it's part of a larger software collection, it's using a few entities that are not defined here, but I hope the algorithms are still clear. CHECK is only an assertion mechanism, you can ignore it. You may also ignore the swap member and just use std::swap.
Some parts of the code are using 1-based offsets, others 0-based, and conversion is necessary. The comments above each method indicate this.
template <
    typename T,
    typename ARRAY = array <T>,
    typename COMP = fun::lt,
    typename SWAP = fun::swap
>
class binary_heap_base
{
protected:
    ARRAY a;
    size_t heap_size;
    SWAP swap_def;
    SWAP* swap;

    // 1-based
    size_t parent(const size_t n) { return n / 2; }
    size_t left  (const size_t n) { return n * 2; }
    size_t right (const size_t n) { return n * 2 + 1; }

    // 1-based
    void heapify(const size_t n = 1)
    {
        T& x = a[n - 1];
        size_t l = left(n);
        size_t r = right(n);
        size_t select =
            (l <= heap_size && COMP()(x, a[l - 1])) ?
            l : n;
        if (r <= heap_size && COMP()(a[select - 1], a[r - 1]))
            select = r;
        if (select != n)
        {
            (*swap)(x, a[select - 1]);
            heapify(select);
        }
    }

    // 1-based
    void build()
    {
        heap_size = a.length();
        for (size_t n = heap_size / 2; n > 0; n--)
            heapify(n);
    }

    // 1-based
    size_t advance(const size_t k)
    {
        size_t n = k;
        while (n > 1)
        {
            size_t pn = parent(n);
            T& p = a[pn - 1];
            T& x = a[n - 1];
            if (!COMP()(p, x)) break;
            (*swap)(p, x);
            n = pn;
        }
        return n;
    }

public:
    binary_heap_base()                        { init();  set_swap();  }
    binary_heap_base(SWAP& s)                 { init();  set_swap(s); }
    binary_heap_base(const ARRAY& a)          { init(a); set_swap();  }
    binary_heap_base(const ARRAY& a, SWAP& s) { init(a); set_swap(s); }

    void init()               { a.init();    build(); }
    void init(const ARRAY& a) { this->a = a; build(); }

    void set_swap()        { swap = &swap_def; }
    void set_swap(SWAP& s) { swap = &s; }

    bool   empty()  { return heap_size == 0; }
    size_t size()   { return heap_size; }
    size_t length() { return heap_size; }

    void reserve(const size_t len) { a.reserve(len); }

    const T& top()
    {
        CHECK (heap_size != 0, eshape());
        return a[0];
    }

    T pop()
    {
        CHECK (heap_size != 0, eshape());
        T x = a[0];
        (*swap)(a[0], a[heap_size - 1]);
        heap_size--;
        heapify();
        return x;
    }

    // 0-based
    size_t up(size_t n, const T& x)
    {
        CHECK (n < heap_size, erange());
        CHECK (!COMP()(x, a[n]), ecomp());
        a[n] = x;
        return advance(n + 1) - 1;
    }

    // 0-based
    size_t push(const T& x)
    {
        if (heap_size == a.length())
            a.push_back(x);
        else
            a[heap_size] = x;
        return advance(++heap_size) - 1;
    }
};
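For comparison, when the extra control of a hand-rolled heap isn't needed, std::priority_queue from &lt;queue&gt; provides the same max-heap behaviour out of the box (a sketch; descending is a name invented here):

```cpp
#include <queue>
#include <vector>

// std::priority_queue is a ready-made binary max-heap: pushing all values
// and popping repeatedly yields them in descending order, exactly what
// build_max_heap plus repeated extract-max produces.
std::vector<int> descending(const std::vector<int>& values)
{
    std::priority_queue<int> pq(values.begin(), values.end());
    std::vector<int> out;
    while (!pq.empty()) {
        out.push_back(pq.top()); // current maximum
        pq.pop();
    }
    return out;
}
```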

display value of pointer after function alter with array C

When i run this it gives a string of numbers and letters (an address im guessing) where have i gone wrong? I im trying to display the highest and lowest numbers
intArray is a 1d array of 10 numbers and size = 10
void greatAndSmall(int intsAray[], const int SZ, int greatAdd, int smallAdd) //def func
{
    while (x < SZ)
    {
        if (intsAray[x] > greatAdd)
            greatAdd = intsAray[x];
        else
            break;
        if (intsAray[x] < smallAdd)
            smallAdd = intsAray[x];
        else
            break;
        x = x + 1;
    }
}
greatAndSmall(intArray, SIZE, &great, &small); //IN MAIN FUNC
cout << "The smallest of these numbers is: " << small << "\n"; //display smallest
cout << "The largest of these numbers is: " << great; //display greatest
Your code, as written, is not valid C/C++ and won't compile. It also has logical problems and wouldn't work even if it did compile (the breaks you are using make the loop exit far too early).
Just use this code:
void greatAndSmall (int intsArray [], int sz, int * largest, int * smallest)
{
    if (sz < 1) return;
    *largest = *smallest = intsArray[0];
    for (int i = 1; i < sz; ++i)
    {
        if (intsArray[i] > *largest)  *largest  = intsArray[i];
        if (intsArray[i] < *smallest) *smallest = intsArray[i];
    }
}
IMPORTANT NOTE: This is C code. Do not for one second think that, just because you use cout, this can be counted as C++ code.
Just for comparison, this is how one might write this in C++:
// Largest value in first and smallest value in second
std::pair<int, int> greatAndSmall (std::vector<int> const & c)
{
    if (c.empty()) return {};
    std::pair<int, int> ret (c[0], c[0]);
    for (unsigned i = 1; i < c.size(); ++i)
    {
        if (c[i] > ret.first)  ret.first  = c[i];
        if (c[i] < ret.second) ret.second = c[i];
    }
    return ret;
}
or this more general (and admittedly more complex) version:
template<typename C>
auto greatAndSmall (C const & c)
    -> std::pair<typename C::value_type, typename C::value_type>
{
    if (c.empty()) return {};
    auto ret = std::make_pair(*c.begin(), *c.begin());
    for (auto const & v : c)
    {
        if (v > ret.first)  ret.first  = v;
        if (v < ret.second) ret.second = v;
    }
    return ret;
}
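For completeness, the standard library can do the whole scan in one call with std::minmax_element; a sketch matching the pair layout above (largest in .first; greatAndSmallStd is a name invented here):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Largest value in .first and smallest in .second, via one library call.
// std::minmax_element returns iterators to the min and max elements.
std::pair<int, int> greatAndSmallStd(const std::vector<int>& c)
{
    if (c.empty()) return {};
    auto [mn, mx] = std::minmax_element(c.begin(), c.end());
    return {*mx, *mn};
}
```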