Segmentation fault in online compilers - c++

The code below works fine in gdb and VS Code, but online compilers keep throwing "segmentation fault". Can anyone please help me with this? I keep getting this error on every question I try to solve.
For example:
Given an array of integers. Find the Inversion Count in the array.
Inversion Count: For an array, inversion count indicates how far (or close) the array is from being sorted. If the array is already sorted then the inversion count is 0. If an array is sorted in the reverse order then the inversion count is the maximum.
Formally, two elements a[i] and a[j] form an inversion if a[i] > a[j] and i < j.
Code:
long long int inversionCount(long long arr[], long long N) {
    vector<long long> v;
    long long int count = 0;
    for (int i = 0; i < N; i++) v[i] = arr[i];
    auto min = min_element(arr, arr + N);
    auto max = max_element(arr, arr + N);
    swap(v[0], *min);
    v.erase(max);
    v.push_back(*max);
    for (int i = 0; i < N; i++) {
        if (v[i] > v[i + 1]) {
            swap(v[i], v[i + 1]);
            count++;
        }
        return count;
    }
}

You have a number of problems here. For one, you try to use elements of v without ever allocating space for them (i.e., you're using subscripts to refer to elements of v even though its size is still zero). I'd usually use the constructor that takes two iterators to copy an existing collection (and pointers can be iterators too):
std::vector<long long> v { arr, arr+N};
Assuming you fix that, this:
v.erase(max);
... invalidates max and every other iterator or reference to any point between the element max previously pointed to, and the end of the collection. Which means that this:
v.push_back(*max);
...is attempting to dereference an invalid iterator, which produces undefined behavior.
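If it helps, here is a minimal sketch of a correct inversionCount that works straight from the definition, in O(N²); for large N a merge-sort based O(N log N) approach would be preferable:

```cpp
// Count inversions directly from the definition: every pair (i, j) with
// i < j and arr[i] > arr[j] is one inversion. O(N^2), but simple and correct.
long long inversionCount(const long long arr[], long long N) {
    long long count = 0;
    for (long long i = 0; i < N; i++)
        for (long long j = i + 1; j < N; j++)
            if (arr[i] > arr[j]) count++;
    return count;
}
```

No vector, min_element, or erase is needed at all for this problem.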

Related

Function to check if an array is a permutation

I have to write a function which accepts an int array parameter and checks to see if it is a
permutation.
I tried this so far:
bool permutationChecker(int arr[], int n){
    for (int i = 0; i < n; i++){
        //Check if the array is the size of n
        if (i == n){
            return true;
        }
        if (i == arr[n]){
            return true;
        }
    }
    return false;
}
but the output says some arrays are permutations even though they are not.
When you write i == arr[n], that doesn't check whether i is in the array; it checks whether the element at position n equals i. That's even worse here: the array's size is n, so there is no valid element at position n, and indexing it is undefined behaviour (the array is over-indexed).
If you'd like to check whether i is in the array, you need to scan each element of the array. You can do this using std::find(). Either that, or you might sort (a copy of) the array, then check if i is at position i:
bool isPermutation(int arr[], int n){
    int* arr2 = new int[n]; // consider using std::array<> / std::vector<> if allowed
    std::copy(arr, arr + n, arr2);
    std::sort(arr2, arr2 + n);
    for (int i = 0; i < n; i++){
        if (i != arr2[i]){
            delete[] arr2;
            return false;
        }
    }
    delete[] arr2;
    return true;
}
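For completeness, a sketch of the std::find() approach mentioned above (isPermutationFind is an illustrative name); it is O(n²), but it needs no copy and no extra memory:

```cpp
#include <algorithm>

// For each i in [0, n), scan the array for i. If any i is missing, the
// array cannot be a permutation of 0..n-1.
bool isPermutationFind(const int arr[], int n) {
    for (int i = 0; i < n; i++) {
        if (std::find(arr, arr + n, i) == arr + n)
            return false; // i does not occur anywhere in arr
    }
    return true;
}
```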
One approach to checking that the input contains one of each value is to create an array of flags (acting like a set), and for each value in your input you set the flag to true for the corresponding index. If that flag is already set, then it's not unique. And if the value is out of range then you instantly know it's not a permutation.
Now, you would normally expect to allocate additional data for this temporary set. But, since your function accepts the input as non-constant data, we can use a trick where you use the same array but store extra information by making values negative.
It will even work for all positive int values, since as of C++20 the standard now guarantees 2's complement representation. That means for every positive integer, a negative integer exists (but not the other way around).
bool isPermutation(int arr[], int n)
{
    // Ensure everything is within the valid range.
    for (int i = 0; i < n; i++)
    {
        if (arr[i] < 1 || arr[i] > n) return false;
    }
    // Check for uniqueness. For each value, use it to index back into the array and then
    // negate the value stored there. If already negative, the value is not unique.
    int count = 0;
    while (count < n)
    {
        int index = std::abs(arr[count]) - 1;
        if (arr[index] < 0)
        {
            break;
        }
        arr[index] = -arr[index];
        count++;
    }
    // Undo any negations done by the step above
    for (int i = 0; i < count; i++)
    {
        int index = std::abs(arr[i]) - 1;
        arr[index] = std::abs(arr[index]);
    }
    return count == n;
}
Let me be clear that using tricky magic is usually not the kind of solution you should go for because it's inevitably harder to understand and maintain code like this. That should be evident simply by looking at the code above. But let's say, hypothetically, you want to avoid any additional memory allocation, your data type is signed, and you want to do the operation in linear time... Well, then this might be useful.
A permutation p of id = [0, ..., n-1] is a bijection onto id. Therefore, no value in p may repeat and no value may be >= n. To check for permutations you somehow have to verify these properties. One option is to sort p and compare it for equality to id. Another is to count the number of individual values set.
Your approach would almost work if you checked (i == arr[i]) instead of (i == arr[n]) but then you would need to sort the array beforehand, otherwise only id will pass your check. Furthermore the check (i == arr[n]) exhibits undefined behaviour because it accesses one element past the end of the array. Lastly the check (i == n) doesn't do anything because i goes from 0 to n-1 so it will never be == n.
With this information you can repair your code, but beware that this approach will destroy the original input.
If you are forced to play with arrays, perhaps your array has fixed size. For example: int arr[3] = {0,1,2};
If this were the case you could use the fact that the size is known at compile time and use an std::bitset. [If not use your approach or one of the others given here.]
template <std::size_t N>
bool isPermutation(int const (&arr)[N]) {
    std::bitset<N> bits;
    for (int a : arr) {
        if (static_cast<std::size_t>(a) < N)
            bits.set(a);
    }
    return bits.all();
}
You don't have to pass the size because the compiler can infer it at compile time. This solution does not allocate additional dynamic memory either, but it will run into problems for large arrays (say, more than a million entries) because std::bitset uses automatic storage and therefore lives on the stack.

C++ Removing empty elements from array

I only want to add a[i] into the result array if the condition is met, but this method causes empty elements in the array as it adds to result[i]. Is there a better way to do this?
for (int i = 0; i < N; i++)
{
    if (a[i] >= lower && a[i] <= upper)
    {
        count++;
        result[i] = a[i];
    }
}
You can let result start out empty and only push_back a[i] when the condition is met:
std::vector<...> result;
for (int i = 0; i < N; i++)
{
    if (a[i] >= lower && a[i] <= upper)
    {
        result.push_back(a[i]);
    }
}
You can drop count entirely, as result.size() tells you how many elements satisfied the condition.
For a more modern solution, as Some programmer dude suggested, you can use std::copy_if together with std::back_inserter to achieve the same thing:
std::vector<...> result;
std::copy_if(a.begin(), a.end(), std::back_inserter(result),
             [&](auto n) {
                 return n >= lower && n <= upper;
             });
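The snippet above assumes `a` has begin()/end() members. If `a` is a raw array, the same std::copy_if call works with a pointer range (filterRange is an illustrative name):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Collect the elements of a raw array that fall inside [lower, upper],
// preserving their original order.
std::vector<int> filterRange(const int* a, int n, int lower, int upper) {
    std::vector<int> result;
    std::copy_if(a, a + n, std::back_inserter(result),
                 [&](int v) { return v >= lower && v <= upper; });
    return result;
}
```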
Arrays in C++ are dumb.
They are just pointers to the beginning of the array and don't know their own length.
If you index arr[i], you have to be sure you aren't out of bounds. Otherwise it is undefined behaviour, because you don't know what part of memory you have written over. You could just as well overwrite a different variable, or the beginning of another array.
So when you add results to a plain array, the array must already have been created with enough space.
The boilerplate of deleting and re-creating dumb arrays so that the array can grow is handled very efficiently by the std::vector container, which remembers the number of elements stored, the number of elements that could be stored, and the array itself. Whenever you add an element and the reserved space is full, it allocates a new array twice the size of the original and copies the data over. That is O(n) in the worst case, but amortized O(1) (it may deviate when n is under a certain threshold).
Then the answer from Stack Danny applies.
Also, use emplace_back instead of push_back where you can: it constructs the element in place from the constructor arguments, and otherwise behaves like push_back. It basically does what you want in the fastest way possible, avoiding as many copies as possible.
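A small illustration of the emplace_back point (makePairs is just an illustrative name): push_back builds a temporary pair and then moves it into the vector, while emplace_back forwards the arguments and constructs the pair directly in place.

```cpp
#include <string>
#include <utility>
#include <vector>

std::vector<std::pair<int, std::string>> makePairs() {
    std::vector<std::pair<int, std::string>> v;
    // push_back: a temporary pair is constructed first, then moved in.
    v.push_back(std::make_pair(1, std::string("one")));
    // emplace_back: the pair is constructed inside the vector from the args.
    v.emplace_back(2, "two");
    return v;
}
```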
count = 0;
for (int i = 0; i < N; i++)
{
    if (a[i] >= lower && a[i] <= upper)
    {
        result[count] = a[i];
        count++;
    }
}
Try this.
Your code was copying elements from a[i] into result[i], which leaves gaps in result wherever the condition fails.
For example, if a[0] and a[2] meet the required condition, but a[1] doesn't, then your code will do the following:
result[0] = a[0];
result[2] = a[2];
Notice how result[1] remains untouched because a[1] didn't meet the required condition. To avoid gaps in the result array, use a separate counter for the destination index instead of i.

How to increase element in a set?

I'm trying to decrease the largest numbers until I run out of m to decrease. For that I thought a set was the best solution, so I tried it. It didn't work; this is the first time I've run into an error like this. Is there any way to change the elements' "mutability"? If you have any advice for a better solution, feel free to answer.
set<pair<float, long int>> t;
long unsigned n, m;
scanf("%lu%lu", &n, &m);
for (long unsigned i = 0; i < n; i++)
{
    float p;
    scanf("%f", &p);
    t.insert({p, 1});
}
m -= n;
while (m)
{
    (*--t.end()).second++;
    (*--t.end()).first *= ((*--t.end()).second - 1) / (*--t.end()).second;
    m--;
}
Is there any way to change elements "mutability"
Not the elements of a set. They are always const. You may not modify them.
What you can do instead is make a copy of the element, erase the element from the set, and insert a modified value.
P.S. Instead of (*--t.end()), prefer *t.rbegin() (or *std::prev(t.end())), which reads more clearly; note that std::set has no back() member.
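A minimal sketch of the copy-erase-insert pattern described above, applied to the largest element of the set (bumpLargest is an illustrative name; the arithmetic mirrors the question's intent, with a float cast added so the division isn't done in integers):

```cpp
#include <set>
#include <utility>

// Modify the largest element of the set: copy it out, erase it, change the
// copy, and insert the changed value. The set re-sorts it automatically.
void bumpLargest(std::set<std::pair<float, long>>& t) {
    auto it = std::prev(t.end());    // iterator to the largest element
    std::pair<float, long> p = *it;  // copy the element out
    t.erase(it);                     // remove the old value
    p.second++;
    p.first *= static_cast<float>(p.second - 1) / p.second;
    t.insert(p);                     // insert the modified value
}
```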

Heapsort CPU time

I have implemented heapsort in C++. It does sort the array, but it gives me higher CPU times than expected. It is supposed to take n·log(n) operations and to sort faster than, at least, bubble sort and insertion sort.
Instead, it gives me higher CPU times than both bubble sort and insertion sort. For example, for a random array of ints (size 100000), I have the following CPU times (in nanoseconds):
BubbleSort: 1.0957e+11
InsertionSort: 4.46416e+10
MergeSort: 7.2381e+08
HeapSort: 2.04685e+11
This is the code itself:
#include <iostream>
#include <assert.h>
#include <fstream>
#include <vector>
#include <random>
#include <chrono>
using namespace std;
typedef vector<int> intv;
typedef vector<float> flov;
typedef vector<double> douv;
void max_heapify(intv& , int);
void build_max_heap(intv& v);
double hesorti(intv& v)
{
    auto t0 = chrono::high_resolution_clock::now();
    build_max_heap(v);
    int x = 0;
    int i = v.size() - 1;
    while (i > x)
    {
        swap(v[i], v[x]);
        ++x;
        --i;
    }
    auto t1 = chrono::high_resolution_clock::now();
    double T = chrono::duration_cast<chrono::nanoseconds>(t1 - t0).count();
    return T;
}
void max_heapify(intv& v, int i)
{
    int left = i + 1, right = i + 2;
    int largest;
    if (left <= v.size() && v[left] > v[i])
    {
        largest = left;
    }
    else
    {
        largest = i;
    }
    if (right <= v.size() && v[right] > v[largest])
    {
        largest = right;
    }
    if (largest != i)
    {
        swap(v[i], v[largest]);
        max_heapify(v, largest);
    }
}
void build_max_heap(intv& v)
{
    for (int i = v.size() - 2; i >= 0; --i)
    {
        max_heapify(v, i);
    }
}
There's definitely a problem with the implementation of heap sort.
Looking at hesorti, you can see that it is just reversing the elements of the vector after calling build_max_heap. So somehow build_max_heap isn't just making a heap, it's actually reverse sorting the whole array.
max_heapify already has an issue: in the standard array layout of a heap, the children of the node at array index i are not i+1 and i+2, but 2i+1 and 2i+2. It's being called from the back of the array forwards from build_max_heap. What does this do?
The first time it is called, on the last two elements (when i=n-2), it simply makes sure the larger comes before the smaller. What happens when it is called after that?
Let's do some mathematical induction. Suppose, for all j>i, after calling max_heapify with index j on an array where the numbers v[j+1] through v[n-1] are already in descending order, that the result is that the numbers v[j] through v[n-1] are sorted in descending order. (We've already seen this is true when i=n-2.)
If v[i] is greater or equal to v[i+1] (and therefore v[i+2] as well), no swaps will occur and when max_heapify returns, we know that the values at i through n-1 are in descending order. What happens in the other case?
Here, largest is set to i+1, and by our assumption, v[i+1] is greater than or equal to v[i+2] (and in fact all v[k] for k>i+1) already, so the test against the 'right' index (i+2) never succeeds. v[i] is swapped with v[i+1], making v[i] the largest of the numbers from v[i] through v[n-1], and then max_heapify is called on the elements from i+1 to the end. By our induction assumption, this will sort those elements in descending order, and so we know that now all the elements from v[i] to v[n-1] are in descending order.
Through the power of induction then, we've proved that build_max_heap will reverse sort the elements. The way it does it, is to percolate the elements in turn, working from the back, into their correct position in the reverse-sorted elements that come after it.
Does this look familiar? It's an insertion sort! Except it's sorting in reverse, so when hesorti is called, the sequence of swaps puts it in the correct order.
Insertion sort also has O(n^2) average behaviour, which is why you're getting similar numbers as for bubble sort. It's slower almost certainly because of the convoluted implementation of the insertion step.
TL;DR: Your heap sort is not faster because it isn't actually a heap sort, it's a backwards insert sort followed by an in-place ordering reversal.
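For reference, a hedged sketch of what a fixed version might look like: the children of node i live at 2i+1 and 2i+2, bounds are checked with < (not <=), and the sort phase repeatedly moves the max to the back and re-heapifies the shrinking prefix instead of just reversing the array:

```cpp
#include <algorithm>
#include <vector>

// Sift the element at index i down through a max-heap of the first n elements.
void max_heapify(std::vector<int>& v, int i, int n) {
    int left = 2 * i + 1, right = 2 * i + 2;
    int largest = i;
    if (left < n && v[left] > v[largest]) largest = left;
    if (right < n && v[right] > v[largest]) largest = right;
    if (largest != i) {
        std::swap(v[i], v[largest]);
        max_heapify(v, largest, n);
    }
}

void heapSort(std::vector<int>& v) {
    int n = static_cast<int>(v.size());
    // Build the heap bottom-up, starting from the last internal node.
    for (int i = n / 2 - 1; i >= 0; --i) max_heapify(v, i, n);
    // Repeatedly swap the max to the back and restore the heap on the prefix.
    for (int end = n - 1; end > 0; --end) {
        std::swap(v[0], v[end]);
        max_heapify(v, 0, end);
    }
}
```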

Array sorting backwards?

Okay so I have this function that takes an array and sorts it by using swap sort(?) however when I enter the numbers it sorts it from largest to smallest instead of smallest to largest.
I went through and did this to see what happens each step of the way however it all seems correct to me.
iterator = &first element
temp = smallest number
tempstore = &smallest number
val tempstore = val first element
val first element = val temp
If i change it so that
if(array[i] < *iterator)
becomes:
if(array[i] > *iterator)
it works perfectly but I don't understand this as now it is testing to see if the number is larger and I want smaller.
I know I should probably be using a vector but I am still a newbie and I am yet to learn them. Thanks for any help.
int *sort(int *array, int size)
{
    int temp, *iterator, *tempStore;
    for(int j = 0; j < size; j++)
    {
        iterator = &array[j];
        for(int i = 0; i < size; i++)
        {
            if(array[i] < *iterator)
            {
                temp = array[i];
                tempStore = &array[i];
                *tempStore = *iterator;
                *iterator = temp;
            }
        }
    }
    return array;
}
Your algorithm compares the first element of the array with all the subsequent, so when you use:
if(array[i] > *iterator)
you swap the first element with the i-th element every time the i-th element is greater than the first, so at the end of the first pass you have the greatest element in the first position. If you use the < operator, you get the smallest in front.
Then the second pass should compare the second element of the array with all the subsequent and so on, that's why i needs to start iterating from j + 1.
As you saw it is not straightforward to read and understand the code, moreover the algorithm itself is very poor (it looks like a selection sort with some extra swaps). You do not necessarily need to use a std::vector but you really should learn the standard library.
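A minimal sketch of the fix described above (selectionSort is an illustrative name): start the inner scan at j + 1, and keeping the < comparison then yields ascending order:

```cpp
#include <utility>

// Selection-sort style pass: for each position j, swap any later element
// that is smaller than array[j] into position j.
void selectionSort(int* array, int size) {
    for (int j = 0; j < size; j++) {
        for (int i = j + 1; i < size; i++) {
            if (array[i] < array[j])
                std::swap(array[i], array[j]);
        }
    }
}
```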
This is the C++11 way to do it:
#include <algorithm>  // for std::sort
#include <iterator>   // for std::begin/end
#include <functional> // for std::greater
int v[1000];
... fill v somehow ...
std::sort(std::begin(v), std::end(v), std::greater<int>());
Compact, clear and extremely fast. Note that std::greater<int>() sorts in descending order; drop the third argument to get the ascending order you asked for.