How to increase an element in a set? - C++

I'm trying to decrease the largest numbers until I run out of m to decrease. For that I thought a set was the best solution, so I tried it. It didn't work. This is the first time I've run into an error like this. Is there any way to change an element's "mutability"? If you have any advice for a better solution, feel free to answer.
set<pair<float, long int>> t;
long unsigned n, m;
scanf("%lu%lu", &n, &m);
for (long unsigned i = 0; i < n; i++)
{
    float p;
    scanf("%f", &p);
    t.insert({p, 1});
}
m -= n;
while (m)
{
    (*--t.end()).second++;
    (*--t.end()).first *= ((*--t.end()).second - 1) / (*--t.end()).second;
    m--;
}

Is there any way to change an element's "mutability"?
Not the elements of a set. They are always const. You may not modify them.
What you can do instead is make a copy of the element, erase the element from the set, and insert a modified value.
P.S. Instead of (*--t.end()), use *t.rbegin() or *std::prev(t.end()) (std::set has no back()).
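For example, a minimal sketch of that erase-and-reinsert pattern, reusing the set t from the question (std::prev comes from <iterator>; note the cast to float, since the question's integer expression (second - 1) / second would silently truncate to zero):

auto it = std::prev(t.end());                        // iterator to the largest element
auto node = *it;                                     // copy it out
t.erase(it);                                         // remove the old value
node.second++;                                       // modify the copy
node.first *= float(node.second - 1) / node.second;  // float division, not integer
t.insert(node);                                      // put the modified value back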


Function to check if an array is a permutation

I have to write a function which accepts an int array parameter and checks to see if it is a permutation.
I tried this so far:
bool permutationChecker(int arr[], int n){
    for (int i = 0; i < n; i++){
        //Check if the array is the size of n
        if (i == n){
            return true;
        }
        if (i == arr[n]){
            return true;
        }
    }
    return false;
}
but the output says some arrays are permutations even though they are not.
When you write i == arr[n], that doesn't check whether i is in the array; it checks whether the element at position n equals i. That's even worse here: the array's size is n, so there is no valid element at position n, and reading it is undefined behavior (the array is over-indexed).
If you'd like to check whether i is in the array, you need to scan each element of the array. You can do this using std::find(). Either that, or you might sort (a copy of) the array, then check if i is at position i:
#include <algorithm>  // std::copy, std::sort

bool isPermutation(int arr[], int n){
    int* arr2 = new int[n];  // consider using std::array<> / std::vector<> if allowed
    std::copy(arr, arr + n, arr2);
    std::sort(arr2, arr2 + n);
    for (int i = 0; i < n; i++){
        if (i != arr2[i]){
            delete[] arr2;
            return false;
        }
    }
    delete[] arr2;
    return true;
}
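And the std::find() variant mentioned above, for comparison: O(n^2) time, but no extra memory and no modification of the input.

#include <algorithm>  // std::find

bool isPermutationFind(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        if (std::find(arr, arr + n, i) == arr + n)  // is the value i present anywhere?
            return false;
    }
    return true;
}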
One approach to checking that the input contains one of each value is to create an array of flags (acting like a set), and for each value in your input you set the flag to true for the corresponding index. If that flag is already set, then it's not unique. And if the value is out of range then you instantly know it's not a permutation.
Now, you would normally expect to allocate additional data for this temporary set. But, since your function accepts the input as non-constant data, we can use a trick where you use the same array but store extra information by making values negative.
It will even work for all positive int values, since as of C++20 the standard now guarantees 2's complement representation. That means for every positive integer, a negative integer exists (but not the other way around).
#include <cstdlib>  // std::abs

bool isPermutation(int arr[], int n)
{
    // Ensure everything is within the valid range.
    for (int i = 0; i < n; i++)
    {
        if (arr[i] < 1 || arr[i] > n) return false;
    }

    // Check for uniqueness. For each value, use it to index back into the array
    // and then negate the value stored there. If already negative, the value is
    // not unique.
    int count = 0;
    while (count < n)
    {
        int index = std::abs(arr[count]) - 1;
        if (arr[index] < 0)
        {
            break;
        }
        arr[index] = -arr[index];
        count++;
    }

    // Undo any negations done by the step above.
    for (int i = 0; i < count; i++)
    {
        int index = std::abs(arr[i]) - 1;
        arr[index] = std::abs(arr[index]);
    }

    return count == n;
}
Let me be clear that using tricky magic is usually not the kind of solution you should go for because it's inevitably harder to understand and maintain code like this. That should be evident simply by looking at the code above. But let's say, hypothetically, you want to avoid any additional memory allocation, your data type is signed, and you want to do the operation in linear time... Well, then this might be useful.
A permutation p of id = [0, ..., n-1] is a bijection onto id. Therefore, no value in p may repeat and no value may be >= n. To check for a permutation you somehow have to verify these properties. One option is to sort p and compare it for equality to id. Another is to count the number of distinct values present.
Your approach would almost work if you checked (i == arr[i]) instead of (i == arr[n]), but then you would need to sort the array beforehand; otherwise only id itself would pass your check. Furthermore, the check (i == arr[n]) exhibits undefined behaviour because it accesses one element past the end of the array. Lastly, the check (i == n) doesn't do anything, because i goes from 0 to n-1, so it will never be == n.
With this information you can repair your code, but beware that this approach will destroy the original input.
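A minimal sketch of that repair, sorting in place (so the input array is modified):

#include <algorithm>  // std::sort

bool isPermutation(int arr[], int n) {
    std::sort(arr, arr + n);            // destroys the original input order
    for (int i = 0; i < n; i++) {
        if (arr[i] != i)                // after sorting, a permutation of 0..n-1 is exactly id
            return false;
    }
    return true;
}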
If you are forced to play with arrays, perhaps your array has fixed size. For example: int arr[3] = {0,1,2};
If this were the case, you could use the fact that the size is known at compile time and use an std::bitset. [If not, use your approach or one of the others given here.]
#include <bitset>
#include <cstddef>

template <std::size_t N>
bool isPermutation(int const (&arr)[N]) {
    std::bitset<N> bits;
    for (int a : arr) {
        if (static_cast<std::size_t>(a) < N)
            bits.set(a);
    }
    return bits.all();
}
You don't have to pass the size because C++ can infer it at compile time. This solution also does not allocate additional dynamic memory, but it will run into problems for large arrays (say, more than a million entries) because std::bitset lives in automatic storage and therefore on the stack.

Segmentation fault in online compilers

The code below works fine in gdb and VS Code, but other online compilers keep throwing "segmentation fault". Can anyone please help me with this? Every question I try to solve, I keep getting this error.
For example:
Given an array of integers. Find the Inversion Count in the array.
Inversion Count: For an array, inversion count indicates how far (or close) the array is from being sorted. If the array is already sorted then the inversion count is 0. If an array is sorted in the reverse order then the inversion count is the maximum.
Formally, two elements a[i] and a[j] form an inversion if a[i] > a[j] and i < j.
Code:
long long int inversionCount(long long arr[], long long N) {
    vector<long long> v;
    long long int count = 0;
    for (int i = 0; i < N; i++) v[i] = arr[i];
    auto min = min_element(arr, arr + N);
    auto max = max_element(arr, arr + N);
    swap(v[0], *min);
    v.erase(max);
    v.push_back(*max);
    for (int i = 0; i < N; i++) {
        if (v[i] > v[i+1]) {
            swap(v[i], v[i+1]);
            count++;
        }
        return count;
    }
}
You have a number of problems here. For one, you try to use elements in v without ever allocating space for them (i.e., you're using subscripts to refer to elements of v even though its size is still zero). I'd usually use the constructor that takes two iterators to copy an existing collection (and pointers can be iterators too).
std::vector<long long> v { arr, arr+N};
Assuming you fix that, this:
v.erase(max);
... invalidates max and every other iterator or reference to any point between the element max previously pointed to, and the end of the collection. Which means that this:
v.push_back(*max);
...is attempting to dereference an invalid iterator, which produces undefined behavior.
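For reference, a minimal working version using the constructor suggested above, with a brute-force O(N^2) inversion count (for large N you'd switch to the usual merge-sort-based O(N log N) counting):

#include <vector>
using std::vector;

long long int inversionCount(const long long arr[], long long N) {
    vector<long long> v(arr, arr + N);   // copies the input; no subscripting into an empty vector
    long long int count = 0;
    for (long long i = 0; i < N; i++)
        for (long long j = i + 1; j < N; j++)
            if (v[i] > v[j])             // a[i] > a[j] with i < j is one inversion
                count++;
    return count;
}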

Given indices i, j (j >= i), how to find the frequency of A[j] in the subarray (i, j)?

Given an array A of integers, I am trying to find out, at a given position j, how many times A[j] occurs from i = 0 to i = j in A. I have devised a solution like the one below:
map<int,int> CF[400005];
for (int i = 0; i < n; ++i)
{
    cin >> A[i];
    if (i != 0)
        CF[i] = CF[i-1];
    ++CF[i][A[i]];
}
Then I can answer each query in O(log n) time. But this procedure uses too much memory. Is there any way of doing it using less memory?
For more clarification, see the problem I am trying to solve: http://codeforces.com/contest/190/problem/D
Create an array B of the same size as A, and a map C. In B[j], keep track of the number of occurrences of A[j] at or before j. In C, keep track of the last occurrence of a given element. Then you can answer queries in constant time, and it requires just O(n) memory.
Sorry for my pseudo-C++.
map<int,int> C;
int B[n]; // zeros
for (int i = 0; i < n; ++i)
{
    cin >> A[i];
    int prev = C[A[i]];  // let me assume it gives -1 if no element
    if (prev == -1)      // this is the first occurrence of A[i]
        B[i] = 1;
    else                 // if not the first, then add previous occurrences
        B[i] = B[prev] + 1;
    C[A[i]] = i;         // keep track of where the last info about A[i] is
}
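A runnable version of the same idea; since std::map actually value-initializes missing keys to 0 rather than the assumed -1, this sketch uses find() instead of the sentinel:

#include <iostream>
#include <map>
#include <vector>

int main() {
    int n;
    std::cin >> n;
    std::vector<int> A(n), B(n);
    std::map<int, int> C;              // value -> index of its most recent occurrence
    for (int i = 0; i < n; ++i) {
        std::cin >> A[i];
        auto it = C.find(A[i]);
        if (it == C.end())
            B[i] = 1;                  // first occurrence of A[i]
        else
            B[i] = B[it->second] + 1;  // one more than at its previous occurrence
        C[A[i]] = i;
    }
    // B[j] now holds the frequency of A[j] in A[0..j].
}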
I didn't give this too much time, but maybe instead of using a map with all the positions, use a list where for each element you store the points where the count of that element changes, something along these lines:
struct count_info
{
    int index;
    int count;
    count_info* next;
};
...
std::map<int, count_info*> data;
Then look up the right position in that list. You still need one map, but underneath you would only have a list of these nodes; on a query you look up A[i] in the map and then walk the list while i > index && next && i < next->index. Of course, if O(log n) is a must, this fails, because the list walk is O(n) at worst.

Optimized way to find M largest elements in an NxN array using C++

I need a blazing fast way to find the 2D positions and values of the M largest elements in an NxN array.
Right now I'm doing this:
struct SourcePoint {
    Point point;
    float value;
};

SourcePoint* maxValues = new SourcePoint[M];

for (int j = 0; j < rows; j++) {
    for (int i = 0; i < cols; i++) {
        float sample = arr[i][j];
        if (sample > maxValues[0].value) {
            int q = 1;
            while (sample > maxValues[q].value && q < M) {
                maxValues[q-1] = maxValues[q]; // shuffle the values back
                q++;
            }
            maxValues[q-1].value = sample;
            maxValues[q-1].point = Point(i, j);
        }
    }
}
A Point struct is just two ints - x and y.
This code basically does an insertion sort of the values coming in. maxValues[0] always contains the SourcePoint with the lowest value that still keeps it within the top M values encountered so far. This gives us a quick and easy bailout: if sample <= maxValues[0].value, we don't do anything. The issue I'm having is the shuffling every time a new better value is found. It works its way all the way down maxValues until it finds its spot, shuffling all the elements in maxValues to make room for itself.
I'm getting to the point where I'm ready to look into SIMD solutions, or cache optimisations, since it looks like there's a fair bit of cache thrashing happening. Cutting the cost of this operation down will dramatically affect the performance of my overall algorithm since this is called many many times and accounts for 60-80% of my overall cost.
I've tried using a std::vector and make_heap, but I think the overhead for creating the heap outweighed the savings of the heap operations. This is likely because M and N generally aren't large. M is typically 10-20 and N 10-30 (NxN 100 - 900). The issue is this operation is called repeatedly, and it can't be precomputed.
I just had a thought to pre-load the first M elements of maxValues which may provide some small savings. In the current algorithm, the first M elements are guaranteed to shuffle themselves all the way down just to initially fill maxValues.
Any help from optimization gurus would be much appreciated :)
A few ideas you can try. In some quick tests with N=100 and M=15 I was able to get it around 25% faster in VC++ 2010 but test it yourself to see whether any of them help in your case. Some of these changes may have no or even a negative effect depending on the actual usage/data and compiler optimizations.
Don't allocate a new maxValues array each time unless you need to. Using a stack variable instead of dynamic allocation gets me +5%.
Changing g_Source[i][j] to g_Source[j][i] gains you a very small amount (not as much as I'd thought it would).
Using the structure SourcePoint1 listed at the bottom gets me another few percent.
The biggest gain of around +15% was to replace the local variable sample with g_Source[j][i]. The compiler is likely smart enough to optimize out the multiple reads to the array which it can't do if you use a local variable.
Trying a simple binary search netted me a small loss of a few percent. For larger M/Ns you'd likely see a benefit.
If possible try to keep the source data in arr[][] sorted, even if only partially. Ideally you'd want to generate maxValues[] at the same time the source data is created.
Look at how the data is created/stored/organized may give you patterns or information to reduce the amount of time to generate your maxValues[] array. For example, in the best case you could come up with a formula that gives you the top M coordinates without needing to iterate and sort.
Code for above:
struct SourcePoint1 {
    int x;
    int y;
    float value;
    int test;   // Play with manual/compiler padding if needed
};
If you want to go for micro-optimizations at this point, a simple first step would be to get rid of the Points and just pack both dimensions into a single int. That reduces the amount of data you need to shift around, and gets SourcePoint down to a power-of-two size, which simplifies indexing into it.
Also, are you sure that keeping the list sorted is better than simply recomputing which element is the new lowest after each time you shift the old lowest out?
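For illustration, a sketch of that alternative (reusing maxValues, M, and the i/j scan from the question): keep the kept values unsorted, remember where the minimum sits, and rescan only when it is replaced.

// Unsorted top-M: replace the current minimum, then rescan for the new one.
// Replacement costs O(M), but there is no shuffling; whether this beats the
// insertion sort depends on M and on how often replacements happen.
int minIdx = 0;
// ... inside the i/j scan:
if (sample > maxValues[minIdx].value) {
    maxValues[minIdx].value = sample;
    maxValues[minIdx].point = Point(i, j);
    minIdx = 0;                                  // find the new minimum
    for (int q = 1; q < M; ++q)
        if (maxValues[q].value < maxValues[minIdx].value)
            minIdx = q;
}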
I propose a binary min-heap of fixed size holding the M largest elements (but still in min-heap order!). It probably won't be faster in practice, as I think the OP's insertion sort has decent real-world performance (at least when the recommendations of the other posters in this thread are taken into account).
Look-up in the case of failure should be constant time: if the current element is less than the minimum element of the heap (which holds the M largest elements seen so far), we can reject it outright.
If it turns out that we have an element bigger than the current minimum of the heap (the Mth biggest element) we extract (discard) the previous min and insert the new element.
If the elements are needed in sorted order the heap can be sorted afterwards.
First attempt at a minimal C++ implementation:
template<unsigned size, typename T>
class m_heap {
private:
    T nodes[size];

    // 0-based indexing: the children of node i are 2i+1 and 2i+2,
    // matching parent() below.
    static unsigned parent(unsigned i) { return (i - 1) / 2; }
    static unsigned left(unsigned i)   { return i * 2 + 1; }
    static unsigned right(unsigned i)  { return i * 2 + 2; }

    void bubble_down(unsigned i) {
        for (;;) {
            unsigned j = i;
            if (left(i) < size && nodes[left(i)] < nodes[i])
                j = left(i);
            if (right(i) < size && nodes[right(i)] < nodes[j])
                j = right(i);
            if (i != j) {
                swap(nodes[i], nodes[j]);
                i = j;
            } else {
                break;
            }
        }
    }

    void bubble_up(unsigned i) {
        while (i > 0 && nodes[i] < nodes[parent(i)]) {
            swap(nodes[parent(i)], nodes[i]);
            i = parent(i);
        }
    }

public:
    m_heap() {
        for (unsigned i = 0; i < size; i++) {
            // lowest(), not min(): for floating-point types min() is the
            // smallest positive value, not the most negative one.
            nodes[i] = numeric_limits<T>::lowest();
        }
    }

    void add(const T& x) {
        if (x < nodes[0]) {
            // smaller than the minimum of the kept elements: reject outright
            return;
        }
        nodes[0] = x;      // replace the minimum...
        bubble_down(0);    // ...and restore the heap property
    }

    T* get() { return nodes; }  // raw access to the nodes, used by the test below
};
Small test/usage case:
#include <iostream>
#include <limits>
#include <algorithm>
#include <vector>
#include <stdlib.h>
#include <assert.h>
#include <math.h>
using namespace std;

// INCLUDE TEMPLATED CLASS FROM ABOVE

typedef vector<float> vf;
bool compare(float a, float b) { return a > b; }

int main()
{
    int N = 2000;
    vf v;
    for (int i = 0; i < N; i++) v.push_back(rand() * 1e6 / RAND_MAX);

    static const int M = 50;
    m_heap<M, float> h;
    for (int i = 0; i < N; i++) h.add(v[i]);

    sort(v.begin(), v.end(), compare);
    vf heap(h.get(), h.get() + M);   // m_heap::get() exposes the raw node array
    sort(heap.begin(), heap.end(), compare);

    cout << "Real\tFake" << endl;
    for (int i = 0; i < M; i++) {
        cout << v[i] << "\t" << heap[i] << endl;
        if (fabs(v[i] - heap[i]) > 1e-5) abort();
    }
}
You're looking for a priority queue:
template <class T, class Container = vector<T>,
          class Compare = less<typename Container::value_type>>
class priority_queue;
You'll need to figure out the best underlying container to use, and probably define a Compare function to deal with your Point type.
If you want to optimize it, you could run a queue on each row of your matrix in its own worker thread, then run an algorithm to pick the largest item of the queue fronts until you have your M elements.
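For instance, a sketch of the usual top-M pattern with std::priority_queue, using std::greater so the smallest kept value sits on top (M, rows, cols, and arr are assumed from the question; to recover the 2D positions you would push a value/Point pair with a suitable comparator instead of a bare float):

#include <functional>
#include <queue>
#include <vector>

// Min-heap holding the M largest samples seen so far.
std::priority_queue<float, std::vector<float>, std::greater<float>> topM;
for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
        float sample = arr[i][j];
        if ((int)topM.size() < M) {
            topM.push(sample);               // still filling up to M entries
        } else if (sample > topM.top()) {
            topM.pop();                      // drop the smallest kept value
            topM.push(sample);
        }
    }
}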
A quick optimization would be to add a sentinel value to your maxValues array. If you have maxValues[M].value equal to std::numeric_limits<float>::max(), then you can eliminate the q < M test from your while-loop condition.
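Sketched under the assumption that maxValues is allocated with one extra slot for the sentinel:

#include <limits>

SourcePoint* maxValues = new SourcePoint[M + 1];           // one extra slot
maxValues[M].value = std::numeric_limits<float>::max();    // never exceeded, so the shuffle stops here
// ... the inner loop condition then becomes simply:
// while (sample > maxValues[q].value) { ... q++; }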
One idea would be to use the std::partial_sort algorithm on a plain one-dimensional sequence of references into your NxN array. You could probably also cache this sequence of references for subsequent calls. I don't know how well it performs, but it's worth a try; if it works well enough, you don't need as much "magic". In particular, you don't have to resort to micro-optimizations.
Consider this showcase:
#include <algorithm>
#include <iostream>
#include <vector>
#include <stddef.h>
#include <string.h>   // memset

static const int M = 15;
static const int N = 20;

// Represents a reference to a sample of some two-dimensional array
class Sample
{
public:
    Sample( float *arr, size_t row, size_t col )
        : m_arr( arr ),
          m_row( row ),
          m_col( col )
    {
    }

    inline operator float() const {
        return m_arr[m_row * N + m_col];
    }

    bool operator<( const Sample &rhs ) const {
        // Inverted comparison, so partial_sort puts the largest samples first.
        return (float)rhs < (float)*this;
    }

    int row() const {
        return m_row;
    }

    int col() const {
        return m_col;
    }

private:
    float *m_arr;
    size_t m_row;
    size_t m_col;
};
int main()
{
    // Setup a demo array
    float arr[N][N];
    memset( arr, 0, sizeof( arr ) );

    // Put in some sample values
    arr[2][1] = 5.0;
    arr[9][11] = 2.0;
    arr[5][4] = 4.0;
    arr[15][7] = 3.0;
    arr[12][19] = 1.0;

    // Setup the sequence of references into this array; you could keep
    // a copy of this sequence around to reuse it later, I think.
    std::vector<Sample> samples;
    samples.reserve( N * N );
    for ( size_t row = 0; row < N; ++row ) {
        for ( size_t col = 0; col < N; ++col ) {
            samples.push_back( Sample( (float *)arr, row, col ) );
        }
    }

    // Let partial_sort find the M largest entries
    std::partial_sort( samples.begin(), samples.begin() + M, samples.end() );

    // Print out the row/column of the M largest entries.
    for ( std::vector<Sample>::size_type i = 0; i < M; ++i ) {
        std::cout << "#" << (i + 1) << " is " << (float)samples[i]
                  << " at " << samples[i].row() << "/" << samples[i].col() << std::endl;
    }
}
First of all, you are marching through the array in the wrong order!
You always, always, always want to scan through memory linearly. That means the last index of your array needs to be changing fastest. So instead of this:
for (int j = 0; j < rows; j++) {
for (int i = 0; i < cols; i++) {
float sample = arr[i][j];
Try this:
for (int i = 0; i < cols; i++) {
for (int j = 0; j < rows; j++) {
float sample = arr[i][j];
I predict this will make a bigger difference than any other single change.
Next, I would use a heap instead of a sorted array. The standard <algorithm> header already has push_heap and pop_heap functions to use a vector as a heap. (This will probably not help all that much, though, unless M is fairly large. For small M and a randomized array, you do not wind up doing all that many insertions on average... Something like O(log N) I believe.)
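For what it's worth, a small sketch of that with the standard heap helpers, keeping a min-heap of the top M in a vector (M and the per-sample loop are assumed from the question):

#include <algorithm>
#include <functional>
#include <vector>

std::vector<float> top;   // maintained as a min-heap: top.front() is the smallest kept value
// for each sample:
if ((int)top.size() < M) {
    top.push_back(sample);
    std::push_heap(top.begin(), top.end(), std::greater<float>());
} else if (sample > top.front()) {
    std::pop_heap(top.begin(), top.end(), std::greater<float>());   // move min to the back
    top.back() = sample;                                            // replace it
    std::push_heap(top.begin(), top.end(), std::greater<float>());  // restore the heap
}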
Next after that is to use SSE2. But that is peanuts compared to marching through memory in the right order.
You should be able to get nearly linear speedup with parallel processing.
With N CPUs, you can process a band of rows/N rows (and all columns) with each CPU, finding the top M entries in each band. And then do a selection sort to find the overall top M.
You could probably do that with SIMD as well (but here you'd divide up the task by interleaving columns instead of banding the rows). Don't try to make SIMD do your insertion sort faster, make it do more insertion sorts at once, which you combine at the end using a single very fast step.
Naturally you could do both multi-threading and SIMD, but on a problem which is only 30x30, that's not likely to be worthwhile.
I tried replacing float by double, and interestingly that gave me a speed improvement of about 20% (using VC++ 2008). That's a bit counterintuitive, but it seems modern processors or compilers are optimized for double value processing.
Use a linked list to store the best M values so far. You'll still have to iterate over it to find the right spot, but the insertion is O(1). It would probably even be better than binary search plus insertion: O(N)+O(1) vs. O(lg(N))+O(N).
Interchange the fors, so you're not accessing every Nth element in memory and thrashing the cache.
Later edit: throwing in another idea that might work for uniformly distributed values.
Find the min and max in about (3/2)*N^2 comparisons.
Create anywhere from N to N^2 uniformly distributed buckets, preferably closer to N^2 than N.
For every element in the NxN matrix, place it in bucket[(int)((value - min) / range * (k - 1))], where range = max - min and k is the number of buckets.
Finally, build a set starting from the highest bucket down to the lowest, adding whole buckets to it while |current set| + |next bucket| <= M.
If you get M elements, you're done.
You'll likely get fewer elements than M, say P.
Apply your algorithm to the remaining bucket and take the biggest M-P elements out of it.
If the elements are uniform and you use N^2 buckets, its complexity is about 3.5*N^2, vs. your current solution, which is about O(N^2)*ln(M).
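A rough sketch of that bucketing scheme, values only (positions are omitted for brevity, a non-empty input is assumed, and the function name and vector input are illustrative, not from the question):

#include <algorithm>
#include <functional>
#include <vector>

std::vector<float> topMByBuckets(const std::vector<float>& values, int M) {
    float mn = *std::min_element(values.begin(), values.end());
    float mx = *std::max_element(values.begin(), values.end());
    float range = mx - mn;
    const int K = (int)values.size();                 // bucket count, here ~N^2
    std::vector<std::vector<float>> buckets(K);
    for (float v : values) {
        int b = (range > 0) ? (int)((v - mn) / range * (K - 1)) : 0;
        buckets[b].push_back(v);
    }
    std::vector<float> result;
    int b = K - 1;
    // Take whole buckets from the top while they still fit.
    while (b >= 0 && result.size() + buckets[b].size() <= (size_t)M) {
        result.insert(result.end(), buckets[b].begin(), buckets[b].end());
        --b;
    }
    // Partial bucket: take only the largest remaining elements from it.
    if (result.size() < (size_t)M && b >= 0) {
        std::vector<float>& rem = buckets[b];
        std::sort(rem.begin(), rem.end(), std::greater<float>());
        rem.resize(std::min(rem.size(), (size_t)M - result.size()));
        result.insert(result.end(), rem.begin(), rem.end());
    }
    return result;
}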

Sum of all primes under 2 million

I made a program that returns the sum of all primes under 2 million. I really have no idea what's going on with this one; I get 142891895587 as my answer when the correct answer is 142913828922. It seems like it's missing a few primes. I'm pretty sure the getPrimes function works as it is supposed to; I used it a couple of times before and it worked correctly then. The code is as follows:
vector<int> getPrimes(int number);

int main()
{
    unsigned long int sum = 0;
    vector<int> primes = getPrimes(2000000);
    for (int i = 0; i < primes.size(); i++)
    {
        sum += primes[i];
    }
    cout << sum;
    return 0;
}

vector<int> getPrimes(int number)
{
    vector<bool> sieve(number + 1, false);
    vector<int> primes;
    sieve[0] = true;
    sieve[1] = true;
    for (int i = 2; i <= number; i++)
    {
        if (sieve[i] == false)
        {
            primes.push_back(i);
            unsigned long int temp = i*i;
            while (temp <= number)
            {
                sieve[temp] = true;
                temp = temp + i;
            }
        }
    }
    return primes;
}
The expression i*i overflows because i is an int. It is truncated before being assigned to temp. To avoid the overflow, cast it: static_cast<unsigned long>( i ) * i.
Even better, terminate the sieving before that condition occurs: for (int i = 2; i*i <= number; i++). (Note, though, that in this code the outer loop also collects the primes, so cutting it off at the square root means you'd need a second pass to push_back the primes above it.)
Tested; this fixes it.
Incidentally, you're somewhat (un)lucky that this doesn't produce extra primes as well as missing some: the int value is signed and could be negative on overflow, and by my reading of §4.7/2, that would cause the inner loop to be skipped.
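Putting that together, a minimal corrected sketch of the marking loop (the widening cast is the essential part; note also that sum in main should be a 64-bit type such as unsigned long long, since the correct total exceeds 32 bits):

unsigned long long temp = static_cast<unsigned long long>(i) * i; // multiply in 64-bit, then assign
while (temp <= static_cast<unsigned long long>(number))
{
    sieve[temp] = true;
    temp += i;
}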
You may be running into datatype limits: http://en.wikipedia.org/wiki/Long_integer.
This line is the problem:
unsigned long int temp = i*i;
I'll give you a hint. Take a closer look at the initial value you give to temp. What's the first value you exclude from sieve? Are there any other smaller multiples of i that should also be excluded? What different initial value could you use to make sure all the right numbers get skipped?
There are some techniques you can use to help figure this out yourself. One is to try to get your program working using a lower limit. Instead of 2 million, try, say, 30. It's small enough that you can calculate the correct answer quickly by hand, and even walk through your program on paper one line at a time. That will let you discover where things start to go wrong.
Another option is to use a debugger to walk through your program line-by-line. Debuggers are powerful tools, although they're not always easy to learn.
Instead of using a debugger to trace your program, you could print out messages as your program progressed. Say, have it print out each number in the result of getPrimes instead of just printing the sum. (That's another reason you'd want to try a lower limit first — to avoid being overwhelmed by the volume of output.)
Your platform must have 64-bit longs. This line:
unsigned long int temp = i * i;
does not compute correctly because i is declared int and the multiplication result is also int (32-bit). Force the multiplication to be long:
unsigned long int temp = (unsigned long int) i * i;
On my system, long is 32-bit, so I had to change both temp and sum to be unsigned long long.