Varying initializer in a 'for loop' in C++

int i = 0;
for(; i < size-1; i++) {
    int temp = arr[i];
    arr[i] = arr[i+1];
    arr[i+1] = temp;
}
Here I started with the first position of the array. What if, after the loop, I need to execute the for loop again, but this time starting from the next position of the array?
Like the first for loop starts from: Array[0]
Second iteration: Array[1]
Third iteration: Array[2]
Example:
For array: 1 2 3 4 5
for i=0: 2 1 3 4 5, 2 3 1 4 5, 2 3 4 1 5, 2 3 4 5 1
for i=1: 1 3 2 4 5, 1 3 4 2 5, 1 3 4 5 2 so on.

You can nest loops inside each other, including the ability for the inner loop to access the iterator value of the outer loop. Thus:
for(int start = 0; start < size-1; start++) {
    for(int i = start; i < size-1; i++) {
        // Inner code on 'i'
    }
}
This would repeat your loop with an increasing start value, thus repeating with a higher initial value for i until you've gone through your list.

Suppose you have a routine to generate all possible permutations of the array elements for a given length n. Suppose the routine, after processing all n! permutations, leaves the n items of the array in their initial order.
Question: how can we build a routine to make all possible permutations of an array with (n+1) elements?
Answer:
Generate all permutations of the initial n elements, each time process the whole array; this way we have processed all n! permutations with the same last item.
Now, swap the (n+1)-st item with one of those n and repeat permuting the n elements; this way we get another n! permutations with a new last item.
The n elements are left in their previous order, so put that last item back into its initial place and choose another one to put at the end of the array. Reiterate permuting the n items.
And so on.
Remember, after each call the routine leaves the n-item array in its initial order. To retain this property at n+1 we need to make sure the same element ends up at the end of the array after the (n+1)-st iteration of n! permutations.
This is how you can do that:
void ProcessAllPermutations(int arr[], int arrLen, int permLen)
{
    if(permLen == 1)
        ProcessThePermutation(arr, arrLen); // print the permutation
    else
    {
        int lastpos = permLen - 1; // last item position for swaps
        for(int pos = lastpos; pos >= 0; pos--) // pos of item to swap with the last
        {
            swap(arr[pos], arr[lastpos]); // put the chosen item at the end
            ProcessAllPermutations(arr, arrLen, permLen - 1);
            swap(arr[pos], arr[lastpos]); // put the chosen item back at pos
        }
    }
}
and here is an example of the routine running: https://ideone.com/sXp35O
Note, however, that this approach is highly inefficient:
It may work in a reasonable time for very small input sizes only. The number of permutations is a factorial function of the array length, and it grows faster than exponentially, which makes for a really BIG number of tests.
The routine has no early return. Even if the first or second permutation is the correct result, the routine will perform all the remaining n! unnecessary tests, too. Of course one can add a return path to break the iteration, but that would make the code somewhat ugly. And it would bring no significant gain, because the routine would still have to make n!/2 tests on average.
Each generated permutation appears deep in the last level of the recursion. Testing for a correct result requires making a call to ProcessThePermutation from within ProcessAllPermutations, so it is difficult to replace the callee with some other function. The caller function must be modified each time you need another method of testing / processing / whatever. Or one would have to provide a pointer to a processing function (a 'callback') and push it down through all the recursion, down to the place where the call will happen (see the sketch after this list). This might be done indirectly by a virtual function in some context object, so it would look quite nice, but the overhead of passing additional data down the recursive calls cannot be avoided.
The routine has yet another interesting property: it does not rely on the data values. Elements of the array are never compared. This may sometimes be an advantage: the routine can permute any kind of objects, even if they are not comparable. On the other hand it cannot detect duplicates, so in case of equal items it will produce repeated results. In the degenerate case of all n items being equal, the result will be n! equal sequences.
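For illustration only, here is a minimal sketch of the callback idea mentioned in the list above (my sketch, not the author's code), assuming a std::function parameter is acceptable; the processing step is passed down the recursion instead of being hard-coded:
#include <functional>
#include <utility>   // std::swap

void ProcessAllPermutations(int arr[], int arrLen, int permLen,
                            const std::function<void(int[], int)>& process)
{
    if(permLen == 1)
        process(arr, arrLen);                  // the caller decides what to do with each permutation
    else
    {
        int lastpos = permLen - 1;
        for(int pos = lastpos; pos >= 0; pos--)
        {
            std::swap(arr[pos], arr[lastpos]); // put the chosen item at the end
            ProcessAllPermutations(arr, arrLen, permLen - 1, process);
            std::swap(arr[pos], arr[lastpos]); // put the chosen item back at pos
        }
    }
}
// Hypothetical usage: a lambda replaces ProcessThePermutation.
// ProcessAllPermutations(arr, 5, 5, [](int a[], int len) { /* inspect a[0..len-1] */ });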
So if you ask how to generate all permutations to detect a sorted one, I must answer: DON'T.
Do learn effective sorting algorithms instead.

Related

Every sum possibilities of elements

From a given array (call it numbers[]), I want another array (results[]) which contains all sum possibilities between elements of the first array.
For example, if I have numbers[] = {1,3,5}, results[] will be {1,3,5,4,8,6,9,0}.
There are 2^n possibilities.
It doesn't matter if a number appears twice, because results[] will be a set.
I did it for sums of pairs or triplets, and it's very easy. But I don't understand how it works when we sum 0, 1, 2 or n numbers.
This is what I did for pairs :
std::unordered_set<int> pairPossibilities(std::vector<int> &numbers) {
    std::unordered_set<int> results;
    for(int i=0; i<numbers.size()-1; i++) {
        for(int j=i+1; j<numbers.size(); j++) {
            results.insert(numbers.at(i)+numbers.at(j));
        }
    }
    return results;
}
Also, assuming that numbers[] is sorted, is there any possibility to sort results[] while we fill it?
Thanks!
This can be done with Dynamic Programming (DP) in O(n*W) where W = sum{numbers}.
This is basically the same solution of Subset Sum Problem, exploiting the fact that the problem has optimal substructure.
DP[i, 0] = true
DP[-1, w] = false w != 0
DP[i, w] = DP[i-1, w] OR DP[i-1, w - numbers[i]]
Start by following the above solution to find DP[n, sum{numbers}].
As a result, you will get:
DP[n , w] = true if and only if w can be constructed from numbers
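Here is a minimal sketch of that bottom-up DP (my code, not the answerer's), collapsed to a one-dimensional reachability table; it assumes the numbers are non-negative, as in the question:
#include <iostream>
#include <numeric>
#include <vector>

// reachable[w] == true means w can be written as the sum of some subset of numbers.
std::vector<int> allSubsetSums(const std::vector<int>& numbers)
{
    int W = std::accumulate(numbers.begin(), numbers.end(), 0);
    std::vector<bool> reachable(W + 1, false);
    reachable[0] = true;                            // DP[i, 0] = true (the empty subset)
    for (int x : numbers)                           // DP[i, w] = DP[i-1, w] OR DP[i-1, w - x]
        for (int w = W; w >= x; --w)
            if (reachable[w - x]) reachable[w] = true;
    std::vector<int> result;
    for (int w = 0; w <= W; ++w)
        if (reachable[w]) result.push_back(w);      // duplicates collapse, sums come out sorted
    return result;
}

int main()
{
    for (int s : allSubsetSums({1, 3, 5}))
        std::cout << s << ' ';                      // prints: 0 1 3 4 5 6 8 9
}
A nice side effect of this table: the sums come out already sorted, which also addresses the follow-up question about sorting results[] while filling it.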
Following on from the Dynamic Programming answer, you could go with a recursive solution and then use memoization to cache the results: a top-down approach, in contrast to Amit's bottom-up one.
void generateSubsetSum(vector<int>& ans, int sum, vector<int>& nums, int i); // forward declaration

vector<int> subsetSum(vector<int>& nums)
{
    vector<int> ans;
    generateSubsetSum(ans, 0, nums, 0);
    return ans;
}

void generateSubsetSum(vector<int>& ans, int sum, vector<int>& nums, int i)
{
    if(i == nums.size())
    {
        ans.push_back(sum);
        return;
    }
    generateSubsetSum(ans, sum + nums[i], nums, i + 1); // include nums[i]
    generateSubsetSum(ans, sum, nums, i + 1);           // skip nums[i]
}
The result is {9 4 6 1 8 3 5 0} for the set {1,3,5}.
This simply picks the number at index i, adds it to the sum, and recurses. Once that call returns, the second branch follows: the same sum, without nums[i] added. To memoize this you would keep a cache of the sums already produced at each i.
I would do something like this (seems easier) [I wanted to put this in a comment but can't show the shifting and removing of one element at a time there - you might need a linked list]
1 3 5
3 5
-----
4 8
1 3 5
5
-----
6
1 3 5
3 5
5
------
9
Add 0 to the list in the end.
Another way to solve this is to create the subsets of the elements and then sum up each subset's data.
e.g.
1 3 5 = {1,3} + {1,5} + {3,5} + {1,3,5} after removing the single-element sets.
Keep in mind that it is always easier said than done. A single tiny mistake in the implemented algorithm can take a lot of debugging time to find. =]]
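As a rough sketch of that idea (my code, not the answerer's), a bitmask over the indices can stand in for the explicit subsets; the assumption is that n is small enough that 2^n iterations are acceptable and n < 32 so the mask fits in an unsigned int:
#include <unordered_set>
#include <vector>

// Each mask from 0 to 2^n - 1 selects one subset; bit i set means numbers[i] is included.
std::unordered_set<int> allSumPossibilities(const std::vector<int>& numbers)
{
    std::unordered_set<int> results;
    int n = numbers.size();                              // assumed n < 32
    for (unsigned mask = 0; mask < (1u << n); ++mask)    // 2^n subsets, including the empty one (sum 0)
    {
        int sum = 0;
        for (int i = 0; i < n; ++i)
            if (mask & (1u << i)) sum += numbers[i];
        results.insert(sum);                             // the set filters out duplicates
    }
    return results;
}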
There has to be a binary chop version, as well. This one is a bit heavy-handed and relies on that set of answers you mention to filter repeated results:
Split the list into 2,
and generate the list of sums for each half
by recursion:
the minimum state is either
2 entries, with 1 result,
or 3 entries with 3 results
alternatively, take it down to 1 entry with 0 results, if you insist
Then combine the 2 halves:
All the returned entries from both halves are legitimate results
There are 4 additional result sets to add to the output result by combining:
The first half inputs vs the second half inputs
The first half outputs vs the second half inputs
The first half inputs vs the second half outputs
The first half outputs vs the second half outputs
Note that the outputs of the two halves may have some elements in common, but they should be treated separately for these combines.
The inputs can be scrubbed from the returned outputs of each recursion if the inputs are legitimate final results. If they are they can either be added back in at the top-level stage or returned by the bottom level stage and not considered again in the combining.
You could use a bitfield instead of a set to filter out the duplicates. There are reasonably efficient ways of stepping through a bitfield to find all the set bits. The max size of the bitfield is the sum of all the inputs.
There is no intelligence here, but lots of opportunity for parallel processing within the recursion and combine steps.
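Here is a simplified sketch of that split-and-combine scheme (my reading of it, not the answerer's code): if each half returns its subset sums including 0 for the empty subset, then one pairwise combination of the two halves covers all of the input/output cases listed above at once; a std::set stands in for the bitfield to filter repeats:
#include <set>
#include <vector>

// sums(lo, hi) returns every subset sum of numbers[lo..hi), including 0 for the empty subset.
std::set<int> sums(const std::vector<int>& numbers, std::size_t lo, std::size_t hi)
{
    if (hi - lo == 0) return {0};
    if (hi - lo == 1) return {0, numbers[lo]};
    std::size_t mid = lo + (hi - lo) / 2;
    std::set<int> left = sums(numbers, lo, mid);      // recurse on each half
    std::set<int> right = sums(numbers, mid, hi);
    std::set<int> combined;
    for (int a : left)
        for (int b : right)
            combined.insert(a + b);                   // the set removes repeated results
    return combined;
}
// Usage sketch: sums(numbers, 0, numbers.size()) for numbers = {1,3,5} yields {0,1,3,4,5,6,8,9}.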

How to erase elements more efficiently from a vector or set?

Problem statement:
Input:
First two inputs are integers n and m. n is the number of knights fighting in the tournament (2 <= n <= 100000, 1 <= m <= n-1). m is the number of battles that will take place.
The next line contains n power levels.
The next m lines contain two integers l and r, indicating the range of knight positions to compete in the ith battle.
After each battle, all knights apart from the one with the highest power level will be eliminated.
The range for each battle is given in terms of the new positions of the knights, not the original positions.
Output:
Output m lines, the ith line containing the original positions (indices) of the knights from that battle. Each line is in ascending order.
Sample Input:
8 4
1 0 5 6 2 3 7 4
1 3
2 4
1 3
0 1
Sample Output:
1 2
4 5
3 7
0
Here is a visualisation of this process.
1 2
[(1,0),(0,1),(5,2),(6,3),(2,4),(3,5),(7,6),(4,7)]
-----------------
4 5
[(1,0),(6,3),(2,4),(3,5),(7,6),(4,7)]
-----------------
3 7
[(1,0),(6,3),(7,6),(4,7)]
-----------------
0
[(1,0),(7,6)]
-----------
[(7,6)]
I have solved this problem. My program produces the correct output; however, it is O(n*m) = O(n^2). I believe that if I erase knights more efficiently from the vector, efficiency can be increased. Would it be more efficient to erase elements using a set? I.e. erase contiguous segments rather than individual knights. Is there an alternative way to do this that is more efficient?
#include <cstdio>
#include <utility>
#include <vector>
using namespace std;

#define INPUT1(x) scanf("%d", &x)
#define INPUT2(x, y) scanf("%d%d", &x, &y)
#define OUTPUT1(x) printf("%d\n", x);

int main(int argc, char const *argv[]) {
    int n, m;
    INPUT2(n, m);
    vector< pair<int,int> > knights(n);
    for (int i = 0; i < n; i++) {
        int power;
        INPUT1(power);
        knights[i] = make_pair(power, i);
    }
    while(m--) {
        int l, r;
        INPUT2(l, r);
        int max_in_range = knights[l].first;
        for (int i = l+1; i <= r; i++) if (knights[i].first > max_in_range) {
            max_in_range = knights[i].first;
        }
        int offset = l;
        int range = r-l+1;
        while (range--) {
            if (knights[offset].first != max_in_range) {
                OUTPUT1(knights[offset].second);
                knights.erase(knights.begin()+offset);
            }
            else offset++;
        }
        printf("\n");
    }
}
Well, removing from a vector wouldn't be efficient, for sure. Removing from a set or unordered_set would be more efficient (use iterators instead of indexes).
Yet the problem will still remain O(n^2), because you have two nested loops running n*m times in total.
--EDIT--
I believe I understand the question now :)
First let's calculate the complexity of your code above. Your worst case would be the case where the max range in all battles is 1 (two knights per battle) and the battles are not ordered with respect to position. That means you have m battles (in this case m = n-1 ~= O(n)).
The first while loop runs n times
The for loop runs once each time, which makes it n*1 = n in total.
The second while loop runs once each time, which makes it n again.
Deleting from the vector means up to n-1 shifts, which makes it O(n).
Thus, with the cost of the vector erase, the total complexity is O(n^2).
First of all, you don't really need the inner for loop. Take the first knight as the max in range, compare the rest in the range one-by-one and remove the defeated ones.
Now, I believe it can be done in O(nlogn) by using std::map. The key of the map is the position and the value is the power level of the knight.
Before proceeding: finding and removing an element in a map is logarithmic, iterating to the next element is constant.
Finally, your code should look like:
while(m--)                                  // n times
    strongest = map.find(first_position);   // find is log(n) --> n*log(n)
    for (opponent = next of strongest;      // this will run 1 time, since every range is 1
         opponent in range;
         opponent = next opponent)          // iterating is constant
        // removing from map is log(n) --> n * 1 * log(n)
        if strongest < opponent
            remove strongest, opponent is the new strongest
        else
            remove opponent (be careful to remove it after iterating to the next)
Ok, now the upper bound would be O(2*nlogn) = O(nlogn). If the ranges increase, the running time of the outer loop decreases but the number of remove operations increases. I'm sure the upper bound won't change; let's make it homework for you to calculate :)
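For illustration, here is a rough sketch of this map-based idea (my code, not the answerer's). The map is keyed by original position, losers are erased in O(log n) each, and ties keep the earlier knight. One caveat: locating the l-th surviving knight with std::next is linear in l, so this sketch on its own does not reach the claimed O(nlogn); an order-statistic structure such as the treap in the next answer would be needed for that.
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <map>
#include <vector>

int main() {
    int n, m;
    std::scanf("%d%d", &n, &m);
    std::map<int, int> alive;                          // original position -> power level, in position order
    for (int i = 0; i < n; i++) {
        int power;
        std::scanf("%d", &power);
        alive[i] = power;
    }
    while (m--) {
        int l, r;
        std::scanf("%d%d", &l, &r);
        auto strongest = std::next(alive.begin(), l);  // l-th surviving knight (linear step, see caveat above)
        std::vector<int> losers;
        auto it = std::next(strongest);
        for (int k = l + 1; k <= r; ++k) {
            auto opponent = it++;                      // advance first, so erasing 'opponent' stays safe
            if (opponent->second > strongest->second) {
                losers.push_back(strongest->first);    // the reigning knight is beaten
                alive.erase(strongest);
                strongest = opponent;
            } else {
                losers.push_back(opponent->first);     // the challenger is beaten
                alive.erase(opponent);
            }
        }
        std::sort(losers.begin(), losers.end());       // each output line must be in ascending original position
        for (int pos : losers) std::printf("%d ", pos);
        std::printf("\n");
    }
}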
A solution with a treap is pretty straightforward.
For each query, you need to split the treap by implicit key to obtain the subtree that corresponds to the [l, r] range (it takes O(log n) time).
After that, you can iterate over the subtree and find the knight with the maximum strength. After that, you just need to merge the [0, l) and [r + 1, end) parts of the treap with the node that corresponds to this knight.
It's clear that all parts of the solution except for the subtree traversal and printing work in O(log n) time per query. However, each operation reinserts only one knight and erases the rest of the range, so the size of the output (and the sum of the sizes of the subtrees) is linear in n. So the total time complexity is O(n log n).
I don't think you can solve this with standard STL containers, because there's no standard container that supports both getting an iterator by index quickly and removing arbitrary elements.

How to find un-ordered numbers (linear search)

A partially ordered list of n numbers is given and I have to find those numbers that do not follow the order (just find them and count them).
There are no repeated numbers.
There are no negative numbers.
MAX = 100000 is the capacity of the list.
n, the number of elements in the list, is given by the user.
Example of two lists:
1 2 5 6 3
1 6 2 9 7 4 8 10 13
For the first list the output is 2, since 5 and 6 should both be after 3, so they are unordered; for the second the output is 3, since 6, 9 and 7 are out of order.
The most important condition in this problem: do the search in linear time, O(n), with quadratic being the worst acceptable case.
Here is part of the code I developed (however it is not valid, since it is a quadratic search).
The "unordered" function compares each element of the array with the one returned by the "minimal" function; if it finds one bigger than that minimum, that element is unordered.
int unordered (int A[MAX], int n)
{
    int count = 0;
    for (int i = 0; i < n-1; i++){
        if (A[i] > minimal(A, n, i+1)){
            count++;
        }
    }
    return count;
}
The "minimal" function takes the minimum of all the elements in the list between the one being compared in the "unordered" function and the last one of the list (i < elements <= n). It is then returned to be compared.
int minimal (int A[MAX], int n, int index)
{
    int i, minimal = 99999999;
    for (i = index; i < n; i++){
        if (A[i] <= minimal)
            minimal = A[i];
    }
    return minimal;
}
How can I do it more efficiently?
Start on the left of the list and compare the current number with the next one. Whenever the next one is smaller than the current, remove the current number from the list and count one up. After removing a number at index n, set your current number to index n-1 and go on.
Because you remove at most n numbers from the list and compare the remaining ones in order, this algorithm is O(n).
I hope this helps. I must admit though that the task of finding numbers that are out of order isn't all that clear.
If O(n) space is no problem, you can first do a linear run (backwards) over the array and save the minimal value so far in another array. Instead of calling minimal you can then look up the minimum value in O(1) and your approach works in O(n).
Something like this:
int minRight[MAX]; // or: int *minRight = new int[n];
minRight[n-1] = A[n-1];
for(int i = n-2; i >= 0; --i)
    minRight[i] = std::min(A[i], minRight[i+1]);
Can be done in O(1) space if you do the first loop backwards because then you only need to remember the current minimum.
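Here is a sketch of that O(1)-space variant (my code): scan from the right, remember the minimum seen so far, and count every element that has a smaller element somewhere after it, which is exactly the criterion used in the question.
#include <algorithm>

// Returns how many elements have a smaller element somewhere to their right.
int unordered(const int A[], int n)
{
    if (n < 2) return 0;
    int minSoFar = A[n-1];
    int count = 0;
    for (int i = n-2; i >= 0; --i)
    {
        if (A[i] > minSoFar) ++count;          // something smaller comes after A[i]
        minSoFar = std::min(minSoFar, A[i]);
    }
    return count;
}
// For 1 2 5 6 3 this returns 2, and for 1 6 2 9 7 4 8 10 13 it returns 3, matching the examples.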
Others have suggested some great answers, but I have an extra way you can think about this problem: using a stack.
Here's how it helps: push the leftmost element of the array onto the stack. Keep pushing until the element you are currently at (in the array) is less than the top of the stack. While it is, pop elements and increment your counter. When it is greater than the top of the stack, push it. In the end, when all array elements are processed, you'll have the count of those that are out of order.
Sample run: 1 5 6 3 7 4 10
Step 1: Stack => 1
Step 2: Stack => 1 5
Step 3: Stack => 1 5 6
Step 4: Now we see 3 is in. While 3 is less than top of stack, pop and increment counter. We get: Stack=> 1 3 -- Count = 2
Step 5: Stack => 1 3 7
Step 6: We got 4 now. Repeat same logic. We get: Stack => 1 3 4 -- Count = 3
Step 7: Stack => 1 3 4 10 -- Count = 3. And we're done.
This should be O(N) for time and space. Correct me if I'm wrong.
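A rough sketch of this stack approach (my code, not the answerer's):
#include <stack>
#include <vector>

// Pop (and count) every stacked value that turns out to have a smaller element after it.
int countOutOfOrder(const std::vector<int>& a)
{
    std::stack<int> st;
    int count = 0;
    for (int x : a)
    {
        while (!st.empty() && st.top() > x)   // x is smaller than values pushed earlier
        {
            st.pop();
            ++count;
        }
        st.push(x);
    }
    return count;
}
// countOutOfOrder({1, 5, 6, 3, 7, 4, 10}) returns 3, matching the sample run above.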

Is this code a bubble sorting program?

I made a simple bubble sorting program; the code works but I do not know if it's correct.
What I understand about the bubble sorting algorithm is that it checks an element and the other element beside it.
#include <iostream>
#include <array>
#include <cstdlib>
using namespace std;

int main()
{
    int a, b, c, d, e, smaller = 0, bigger = 0;
    cin >> a >> b >> c >> d >> e;
    int test1[5] = { a,b,c,d,e };
    for (int test2 = 0; test2 != 5; ++test2)
    {
        for (int cntr1 = 0, cntr2 = 1; cntr2 != 5; ++cntr1, ++cntr2)
        {
            if (test1[cntr1] > test1[cntr2]) /*if first is bigger than second*/ {
                bigger = test1[cntr1];
                smaller = test1[cntr2];
                test1[cntr1] = smaller;
                test1[cntr2] = bigger;
            }
        }
    }
    for (auto test69 : test1)
    {
        cout << test69 << endl;
    }
    system("pause");
}
It is a bubblesort implementation. It is just a very basic one.
Two improvements:
the outer loop iteration may be one shorter each time, since you're guaranteed that the last element of the previous iteration will be the largest.
when no swap is done during an iteration, you're finished. (which is part of the definition of bubblesort in wikipedia)
Some comments:
use better variable names (test2?)
use the size of the container or the range, don't hardcode 5.
using std::swap() to swap variables leads to simpler code.
Here is a more generic example using (random access) iterators with my suggested improvements and comments and here with the improvement proposed by Yves Daoust (iterate up to last swap) with debug-prints
The correctness of your algorithm can be explained as follows.
In the first pass (inner loop), the comparison T[i] > T[i+1] with a possible swap makes sure that the largest of T[i], T[i+1] is on the right. Repeating for all pairs from left to right makes sure that in the end T[N-1] holds the largest element. (The fact that the array is only modified by swaps ensures that no element is lost or duplicated.)
In the second pass, by the same reasoning, the largest of the N-1 first elements goes to T[N-2], and it stays there because T[N-1] is larger.
More generally, in the Kth pass, the largest of the N-K+1 first element goes to T[N-K], stays there, and the next elements are left unchanged (because they are already increasing).
Thus, after N passes, all elements are in place.
This hints a simple optimization: all elements following the last swap in a pass are in place (otherwise the swap wouldn't be the last). So you can record the position of the last swap and perform the next pass up to that location only.
Though this change doesn't seem to improve a lot, it can reduce the number of passes. Indeed by this procedure, the number of passes equals the largest displacement, i.e. the number of steps an element has to take to get to its proper place (elements too much on the right only move one position at a time).
In some configurations, this number can be small. For instance, sorting an already sorted array takes a single pass, and sorting an array with all elements swapped in pairs takes two. This is an improvement from O(N²) to O(N)!
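For illustration, here is a sketch of bubble sort with this last-swap optimization (a hypothetical bubbleSort helper, not the poster's code):
#include <utility>   // std::swap

// Each pass only needs to go up to the position of the previous pass's last swap,
// because everything after that position is already in place.
void bubbleSort(int a[], int n)
{
    int upper = n;                        // one past the last position that may still be out of place
    while (upper > 1)
    {
        int lastSwap = 0;
        for (int i = 1; i < upper; ++i)
        {
            if (a[i-1] > a[i])
            {
                std::swap(a[i-1], a[i]);
                lastSwap = i;             // everything at index >= i is final after this pass
            }
        }
        upper = lastSwap;                 // 0 when no swap happened: the array is sorted
    }
}
On an already sorted array this does a single pass, and on an array with all adjacent pairs swapped it does two, as described above.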
Yes. Your code works just like Bubble Sort.
Input: 3 5 1 8 2
Output after each iteration:
3 1 5 2 8
1 3 2 5 8
1 2 3 5 8
1 2 3 5 8
1 2 3 5 8
1 2 3 5 8
Actually, in the inner loop, we don't need to go all the way to the end of the array from the second iteration onwards, because the heaviest element of the previous iteration is already at the end. But that doesn't improve the time complexity much. So, you are good to go.
Small Informal Proof:
The idea behind your sorting algorithm is that you go through the array of values (left to right). Let's call it a pass. During the pass, pairs of values are checked and swapped into the correct order (higher on the right).
During the first pass the maximum value will be reached. When reached, the max will be higher than the value next to it, so they will be swapped. This means that the max becomes part of the next pair in the pass. This repeats until the pass is completed and the max has moved to the right end of the array.
During the second pass the same is true for the second highest value in the array. The only difference is that it will not be swapped with the max at the end. Now the two rightmost values are correctly placed.
In every subsequent pass one more value is sorted out to the right.
There are N values and N passes. This means that after N passes all N values will be sorted like:
{Nth largest, (N-1)th largest, ..., 2nd largest, largest}
No it isn't. It is worse. There is no point whatsoever in the variable cntr1. You should be using test1 here, and you should be referring to one of the many canonical implementations of bubblesort rather than trying to make it up for yourself.

How to convert a simple computer algorithm into a mathematical function in order to determine the big o notation?

In my university we are learning Big O notation. However, one question that I have in light of Big O notation is: how do you convert a simple computer algorithm, say, for example, a linear searching algorithm, into a mathematical function, for example 2n^2 + 1?
Here is a simple and non-robust linear searching algorithm that I have written in C++11. Note: I have disregarded all header files (iostream) and function parameters just for simplicity. I will just be using basic operators, loops, and data types in order to show the algorithm.
int array[5] = {1,2,3,4,5};
// Variable to hold the value we are searching for
int searchValue;
// Ask the user to enter a search value
cout << "Enter a search value: ";
cin >> searchValue;
// Create a loop to traverse through each element of the array and find
// the search value
for (int i = 0; i < 5; i++)
{
    if (searchValue == array[i])
    {
        cout << "Search Value Found!" << endl;
    }
    else
        // If S.V. not found then print out a message
        cout << "Sorry... Search Value not found" << endl;
}
In conclusion, how do you translate an algorithm into a mathematical function so that we can analyze how efficient an algorithm really is using big o notation? Thanks world.
First, be aware that it's not always possible to analyze the time complexity of an algorithm, there are some where we do not know their complexity, so we have to rely on experimental data.
All of the methods involve counting the number of operations done. So first, we have to define the cost of basic operations like assignment, memory allocation, control structures (if, else, for, ...). Some values I will use (working with different models can provide different values):
Assignment takes constant time (ex: int i = 0;)
Basic operations take constant time (+ - * /)
Memory allocation is proportional to the memory allocated: allocating an array of n elements takes linear time.
Conditions take constant time (if, else, else if)
Loops take time proportional to the number of times the body is run.
Basic analysis
The basic analysis of a piece of code is: count the number of operations for each line, sum those costs, done.
int i = 1;
i = i*2;
System.out.println(i);
For this, there is one operation on line 1, one on line 2 and one on line 3. Those operations are constant: This is O(1).
for(int i = 0; i < N; i++) {
    System.out.println(i);
}
For a loop, count the number of operations inside the loop and multiply by the number of times the loop is run. There is one operation inside, which takes constant time. It is run n times -> complexity is n * 1 -> O(n).
for (int i = 0; i < N; i++) {
    for (int j = i; j < N; j++) {
        System.out.println(i+j);
    }
}
This one is more tricky because the second loop starts its iteration based on i. Line 3 does 2 operations (addition + print) which take constant time, so it takes constant time. Now, how many times line 3 is run depends on the value of i. Enumerate the cases:
When i = 0, j goes from 0 to N so line 3 is run N times.
When i = 1, j goes from 1 to N so line 3 is run N-1 times.
...
Now, summing all this we have to evaluate N + N-1 + N-2 + ... + 2 + 1. The result of the sum is N*(N+1)/2 which is quadratic, so complexity is O(n^2).
And that's how it works for many cases: count the number of operations, sum all of them, get the result.
Amortized time
An important notion in complexity theory is amortized time. Let's take this example: running operation() n times:
for (int i = 0; i < N; i++) {
    operation();
}
If one says that operation takes amortized constant time, it means that running n operations took linear time, even though one particular operation may have taken linear time.
Imagine you have an empty dynamic array with room for 1000 elements. Now, insert 1000 elements into it. Easy as pie, every insertion took constant time. And now, insert another element. For that, you have to create a new (bigger) array, copy the data from the old array into the new one, and insert element 1001. The first 1000 insertions took constant time, the last one took linear time. In this case, we say that all insertions took amortized constant time, because the cost of that last insertion was amortized over the others.
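To make the same point concretely, here is a small sketch (mine, not the answerer's) that pushes n elements into a std::vector and counts how many element copies the occasional reallocations cause; the total stays proportional to n, which is the amortized-constant-time claim above (the exact constant depends on the library's growth factor):
#include <cstdio>
#include <vector>

int main()
{
    const int n = 1000000;
    std::vector<int> v;
    long long copies = 0;
    for (int i = 0; i < n; ++i)
    {
        if (v.size() == v.capacity())     // the next push_back will reallocate...
            copies += v.size();           // ...and copy every existing element
        v.push_back(i);
    }
    std::printf("%d pushes caused %lld element copies (about %f per push)\n",
                n, copies, double(copies) / n);
}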
Make assumptions
In some other cases, getting the number of operations requires making hypotheses. A perfect example of this is insertion sort, because it is simple and its running time depends on how the data is ordered.
First, we have to make some more assumptions. Sorting involves two elementary operations, that is comparing two elements and swapping two elements. Here I will consider both of them to take constant time. Here is the algorithm where we want to sort array a:
for (int i = 0; i < a.length; i++) {
    int j = i;
    while (j > 0 && a[j] < a[j-1]) {
        swap(a, j, j-1);
        j--;
    }
}
First loop is easy. No matter what happens inside, it will run n times. So the running time of the algorithm is at least linear. Now, to evaluate the second loop we have to make assumptions about how the array is ordered. Usually, we try to define the best-case, worst-case and average case running time.
Best-case: We never enter the while loop. Is this possible? Yes. If a is a sorted array, then a[j] >= a[j-1] no matter what j is. Thus, we never enter the second loop. So the operations done in this case are the assignment on line 2 and the evaluation of the condition on line 3. Both take constant time. Because of the first loop, those operations are run n times. So in the best case, insertion sort is linear.
Worst-case: We leave the while loop only when we reach the beginning of the array. That is, we swap every element all the way down to index 0, for every element in the array. It corresponds to an array sorted in reverse order. In this case, the first element is swapped 0 times, element 2 is swapped 1 time, element 3 is swapped 2 times, and so on, up to element n being swapped n-1 times. We already know the result of this sum: worst-case insertion sort is quadratic.
Average case: For the average case, we assume the items are randomly distributed inside the array. If you're interested in the maths, it involves probabilities and you can find the proof in many places. Result is quadratic.
Conclusion
Those were basics about analyzing the time complexity of an algorithm. The cases were easy, but there are some algorithms which aren't as nice. For example, you can look at the complexity of the pairing heap data structure which is much more complex.