MATLAB equivalent in C++

In MATLAB, in order to access the odd or even rows and columns of a matrix we use
A = M(1:2:end,1:2:end);
Is there an equivalent for this in C++? How do I do this in C++?
Basically, what I want to do is the following: in MATLAB I have
A(1:2:end,1:2:end) = B(1:2:end,:);
A(2:2:end,2:2:end) = B(2:2:end,:);
and I want to implement the same in C++.

This is available only in a fairly obscure class, std::valarray. You need a std::gslice (Generalized slice) with stride {2,2} to access the std::valarray.
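For example, a minimal sketch, assuming the matrix is stored row-major in a std::valarray (the function and parameter names are only illustrative):

#include <valarray>
#include <cstddef>

// Select every second row and column, i.e. the MATLAB expression M(1:2:end,1:2:end),
// from a rows x cols matrix stored row-major in a std::valarray.
std::valarray<double> odd_rows_cols(const std::valarray<double>& M,
                                    std::size_t rows, std::size_t cols)
{
    std::valarray<std::size_t> sizes   = { (rows + 1) / 2, (cols + 1) / 2 };
    std::valarray<std::size_t> strides = { 2 * cols, 2 };    // skip one row / one column per step
    return M[std::gslice(0, sizes, strides)];                // start at element (0,0)
}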

In C++ the for loop is constructed as follows
for (initial state; condition for termination; increment)
So if you are looking for the odd elements, you can:
for (int i = 0; i < size; i += 2),
whereas if you are looking for the even elements:
for (int i = 1; i < size; i += 2).
Where size depends if you are looping through the rows or columns. Take into account that because C++ arrays start at index 0, your odd elements will correspond to even indexes and your even elements will correspond to odd indexes.
Now, the answer: if you want to get the elements of a matrix, in C++ you must loop through the matrix with a for loop. You can control which elements you access by changing the increment expression of the for loop.

// Copy every second row and column of M into A (the equivalent of A = M(1:2:end,1:2:end));
// (rows + 1) / 2 and (columns + 1) / 2 also handle matrices with odd dimensions
for (int i = 0; i < (rows + 1) / 2; i++)
    for (int j = 0; j < (columns + 1) / 2; j++)
        A[i][j] = M[i * 2][j * 2];

Sort array of n elements which has k sorted sections

What is the best way to sort a section-wise sorted array as depicted in the second image?
The problem is performing a quicksort using the Message Passing Interface (MPI). The solution is to perform quicksort on array sections obtained using MPI_Scatter(), then join the sorted pieces using MPI_Gather().
The problem is that the array as a whole is unsorted, but each of its sections is sorted.
Merging the sub-sections similarly to this solution seems like the best way of sorting the array, but considering that the sub-arrays already sit inside a single array, other sorting algorithms may prove better.
The inputs for a sort function would be the array, its length and the number of equally sized sorted sub-sections.
A signature would look something like int* sort(int* array, int length, int sections);
The sections parameter can have any value between 1 and 25. The length parameter value is greater than 0, a multiple of sections and smaller than 2^32.
This is what I am currently using:
int* merge(int* input, int length, int sections)
{
    int* sub_sections_indices = new int[sections];
    int* result = new int[length];
    int section_size = length / sections;
    for (int i = 0; i < sections; i++) // initialisation
    {
        sub_sections_indices[i] = 0;
    }
    int min, min_index, current_index;
    for (int i = 0; i < length; i++) // merging
    {
        min_index = 0;
        min = INT_MAX; // requires <climits>
        for (int j = 0; j < sections; j++)
        {
            if (sub_sections_indices[j] < section_size)
            {
                current_index = j * section_size + sub_sections_indices[j];
                if (input[current_index] < min)
                {
                    min = input[current_index];
                    min_index = j;
                }
            }
        }
        sub_sections_indices[min_index]++;
        result[i] = min;
    }
    delete[] sub_sections_indices; // avoid leaking the per-section cursors
    return result;
}
Optimizing for performance
I think this answer that maintains a min-heap of the smallest item of each sub-array is the best way to handle arbitrary input. However, for small values of k, think somewhere between 10 and 100, it might be faster to implement the more naive solutions given in the question you linked to; while maintaining the min-heap is only O(log k) per step, it might have a higher overhead for small values of k than the simple linear scan of the naive solutions.
All these solutions create a copy of the input, and they maintain O(k) state.
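For reference, a minimal sketch of the heap-based k-way merge mentioned above: it keeps the current smallest element of every sorted section in a min-heap, giving O(n log k) time and O(k) extra state besides the output copy. Like the code in the question, it assumes length is a multiple of sections, and the caller owns the returned buffer.

#include <functional>
#include <queue>
#include <tuple>
#include <vector>

int* merge_heap(int* input, int length, int sections)
{
    int section_size = length / sections;
    using Entry = std::tuple<int, int, int>;   // (value, section, offset within section)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
    for (int s = 0; s < sections; ++s)
        heap.emplace(input[s * section_size], s, 0);

    int* result = new int[length];
    for (int i = 0; i < length; ++i) {
        auto [value, s, off] = heap.top();     // smallest remaining element across all sections
        heap.pop();
        result[i] = value;
        if (off + 1 < section_size)            // push that section's next element, if any
            heap.emplace(input[s * section_size + off + 1], s, off + 1);
    }
    return result;
}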
Optimizing for space
The only way to save space I see is to sort in place. This will be a problem for the algorithms mentioned above. An in-place algorithm will have to swap elements, but any swap is likely to destroy the property that each sub-array is sorted, unless the larger element of the swapped pair is re-sorted into the sub-array it is being swapped into, which results in an O(n²) algorithm. So if you really do need to conserve memory, I think a regular in-place sorting algorithm would have to be used, which defeats your purpose.

Performance optimization nested loops

I am implementing rather complicated code, and in one of its critical sections I basically need to consider all possible strings of numbers that follow a certain rule. A naive implementation illustrating what I do would be a nested loop like this:
std::array<int,3> max = { 3, 4, 6};
for(int i = 0; i <= max.at(0); ++i){
    for(int j = 0; j <= max.at(1); ++j){
        for(int k = 0; k <= max.at(2); ++k){
            DoSomething(i, j, k);
        }
    }
}
Obviously, I actually need more nested for loops and the "max" rule is more complicated, but I think the idea is clear.
I implemented this idea using a recursive function approach:
std::array<int,3> max = { 3, 4, 6};
std::array<int,3> index = {0, 0, 0};
int total_depth = 3;
recursive_nested_for(0, index, max, total_depth);
where
void recursive_nested_for(int depth, std::array<int,3>& index,
                          std::array<int,3>& max, int total_depth)
{
    if(depth != total_depth){
        for(int i = 0; i <= max.at(depth); ++i){
            index.at(depth) = i;
            recursive_nested_for(depth+1, index, max, total_depth);
        }
    }
    else
        DoSomething(index);
}
To save as much as possible, in the actual code I declare all the variables I use as globals.
Since this part of the code takes really long, is it possible to do anything to speed it up?
I would even be open to writing 24 nested for loops if necessary, at least to avoid the overhead!
I thought that maybe an approach like expressions templates to actually generate at compile time these nested for could be more elegant. But is it possible?
Any suggestion would be greatly appreciated.
Thanks to all.
The recursive_nested_for() is a nice idea. It's a bit inflexible as it is currently written. However, you could use std::vector<int> for the array dimensions and indices, or make it a template to handle any size std::array<>. The compiler might be able to inline all recursive calls if it knows how deep the recursion is, and then it will probably be just as efficient as the three nested for-loops.
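For example, a sketch of the template variant (assuming C++17 for if constexpr; passing DoSomething in as a callable is an illustrative choice, not part of the original code):

#include <array>
#include <cstddef>

template <std::size_t Depth, std::size_t N, typename F>
void recursive_nested_for(std::array<int, N>& index,
                          const std::array<int, N>& max, F&& body)
{
    if constexpr (Depth == N) {
        body(index);                              // innermost level reached
    } else {
        for (int i = 0; i <= max[Depth]; ++i) {   // same inclusive bound as the original
            index[Depth] = i;
            recursive_nested_for<Depth + 1>(index, max, body);
        }
    }
}

// Usage:
// std::array<int,3> max = { 3, 4, 6 };
// std::array<int,3> index = {};
// recursive_nested_for<0>(index, max, [](auto& idx) { DoSomething(idx); });

Because Depth is a compile-time constant, the compiler can unroll the recursion completely, so this should generate essentially the same code as hand-written nested loops.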
Another option is to use a single for loop for incrementing the indices that need incrementing:
void nested_for(std::array<int,3>& index, std::array<int,3>& max)
{
    // Note: max.at(i) is used here as an exclusive bound (the number of values per dimension).
    while (index.at(2) < max.at(2)) {
        DoSomething(index);
        // Increment indices, least significant first
        for (int i = 0; i < 3; ++i) {
            if (++index.at(i) < max.at(i) || i == 2)   // never wrap the last index, so the loop can terminate
                break;
            index.at(i) = 0;
        }
    }
}
However, you can also consider creating a linear sequence that visits all possible combinations of the iterators i, j, k and so on. For example, with array dimensions {3, 4, 6}, there are 3 * 4 * 6 = 72 possible combinations. So you can have a single counter going from 0 to 72, and then "split" that counter into the three iterator values you need, like so:
for (int c = 0; c < 72; c++) {
    int k = c % 6;
    int j = (c / 6) % 4;
    int i = c / 6 / 4;
    DoSomething(i, j, k);
}
You can generalize this to as many dimensions as you want. Of course, the more dimensions you have, the higher the cost of splitting the linear iterator. But if your array dimensions are powers of two, it might be very cheap to do so. Also, it might be that you don't need to split it at all; for example, if you are calculating the sum of all elements of a multidimensional array, you don't care about the actual indices i, j, k and so on, you just want to visit all elements once. If the array is laid out linearly in memory, then you just need a linear iterator.
Of course, if you have 24 nested for loops, you'll notice that the product of all the dimension's sizes will become a very large number. If it doesn't fit in a 32 bit integer, your code is going to be very slow. If it doesn't fit into a 64 bit integer anymore, it will never finish.
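To illustrate the generalization to N dimensions, here is a sketch of such a mixed-radix "split" (the name split_index and the use of exclusive per-dimension counts are only illustrative):

#include <array>
#include <cstddef>
#include <cstdint>

// Convert a linear counter c into one index per dimension; dims holds the
// number of values in each dimension (e.g. {3, 4, 6}), and the last
// dimension varies fastest, as in the 3-D example above.
template <std::size_t N>
std::array<int, N> split_index(std::uint64_t c, const std::array<int, N>& dims)
{
    std::array<int, N> idx{};
    for (std::size_t d = N; d-- > 0; ) {       // last dimension first
        idx[d] = static_cast<int>(c % dims[d]);
        c /= dims[d];
    }
    return idx;
}

// Usage:
// std::array<int,3> dims = { 3, 4, 6 };
// for (std::uint64_t c = 0; c < 3 * 4 * 6; ++c) {
//     auto idx = split_index(c, dims);        // idx = {i, j, k}
//     // ...
// }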

knapsack with weight only

If I am given a maximum weight, say W = 20, and a set of weights, say m = [5, 7, 12, 18], how can I calculate the maximum possible weight that can be carried within the maximum weight using m? In this case the answer is 19, by adding 12 + 7 = 19. My code is giving me 18. Please help me with this.
int weight(int W, vector<int> &m) {
    int current_weight = 0;
    int temp;
    // Sort m in descending order
    for (int i = 0; i < m.size(); i++) {
        for (int j = i + 1; j < m.size(); j++) {
            if (m[i] < m[j]) {
                temp = m[j];
                m[j] = m[i];
                m[i] = temp;
            }
        }
    }
    // Greedily take every weight that still fits
    for (size_t i = 0; i < m.size(); ++i) {
        if (current_weight + m[i] <= W) {
            current_weight += m[i];
        }
    }
    return current_weight;
}
The problem you describe looks more like a version of the maximum subset sum problem. Basically, there is nothing wrong with your implementation in the first place; apparently you have correctly implemented a greedy algorithm for the problem. That being said, this algorithm fails to generate an optimal solution for every input. The instance you have found is such an example.
However, the problem can be solved using a different approach termed dynamic programming, which can be seen as a form of organization of a recursive formulation of the solution.
Let m = { m_1, ..., m_n } be the set of positive item sizes and W a capacity constraint, where n is a positive integer. Organize an array A[n][W] as a state space where
A[i][j] = the maximum weight at most j attainable for the set of items
with indices from 0 to i if such a solution exists and
minus infinity otherwise
for each i in {1,...,n} and j in {1,...,W}; for ease of presentation, suppose that A has a value of minus infinity everywhere else. Note that for each such i and j the recurrence relation
A[i][j] = max { A[i-1][j - m_i] + m_i, A[i-1][j] }
holds, where the first case corresponds to selecting item i into the solution and the second case corresponds to not selecting item i into the solution.
Next, organize a loop which fills this table in order of increasing values of i and j, where the initialization for i = 1 has to be done beforehand. After filling the state space, the maximum feasible value in the last row
max{ A[n][j] : j in {1,...,W}, A[n][j] is not minus infinity }
yields the optimal solution. If the associated set of items is also desired, either some backtracking or suitable auxiliary data structures have to be used.
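A minimal sketch of such a DP, using a one-dimensional table where dp[j] is the largest weight not exceeding j attainable from the items processed so far (since 0 is always attainable, no minus-infinity sentinel is needed; the name maxWeight is only illustrative):

#include <vector>
#include <algorithm>

int maxWeight(int W, const std::vector<int>& m)
{
    std::vector<int> dp(W + 1, 0);                 // dp[j] = best weight <= j so far
    for (int item : m)
        for (int j = W; j >= item; --j)            // iterate backwards so each item is used at most once
            dp[j] = std::max(dp[j], dp[j - item] + item);
    return dp[W];                                  // e.g. W = 20, m = {5, 7, 12, 18} gives 19
}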
So it feels like this solution can be obtained as a trivial change to the well-known 0-1 knapsack problem, by passing a copy of the weight array as the value array.

Treats for the cows - bottom up dynamic programming

The full problem statement is here. Suppose we have a double-ended queue of known values. Each turn, we can take a value out of one end or the other, and the value taken is multiplied by the number of turns elapsed, so the values still in the queue effectively increase over time. The goal is to find the maximum possible total value.
My first approach was to use straightforward top-down DP with memoization. Let i, j denote the starting and ending indexes of a "subarray" of the array of values A[].
f(i,j,age) = A[i]*age                                        if i == j
f(i,j,age) = max( f(i+1,j,age+1) + A[i]*age,
                  f(i,j-1,age+1) + A[j]*age )                otherwise
This works; however, it proves to be too slow, as there are superfluous stack calls. An iterative bottom-up approach should be faster.
Let m[i][j] be the maximum reachable value of the "subarray" of A[] with begin/end indexes i,j. Because i <= j, we care only about the lower triangular part.
This matrix can be built iteratively using the fact that m[i][j] = max(m[i-1][j] + A[i]*age, m[i][j-1] + A[j]*age), where age is maximal on the main diagonal (equal to A.size()) and decreases linearly as A.size() - (i - j).
My attempt at an implementation meets with a bus error.
Is the described algorithm correct? What is the cause for the bus error?
Here is the only part of the code where the bus error might occur:
for(T j = 0; j < num_of_treats; j++) {
    max_profit[j][j] = treats[j]*num_of_treats;
    for(T i = j+1; i < num_of_treats; i++)
        max_profit[i][j] = max( max_profit[i-1][j] + treats[i]*(num_of_treats-i+j),
                                max_profit[i][j-1] + treats[j]*(num_of_treats-i+j));
}
for(T j = 0; j < num_of_treats; j++) {
Inside this loop, j is clearly a valid index into the array max_profit. But you're not using just j.
The bus error is caused by trying to access the array via a negative index when j=0 and i=1, as I should have noticed during debugging. The algorithm is wrong as well. First, the relationship used to construct the max_profit[][] array should be
max_profit[i][j] = max( max_profit[i+1][j] + treats[i]*(num_of_treats-i+j),
max_profit[i][j-1] + treats[j]*(num_of_treats-i+j));
Second, the array must be filled diagonally, so that max_profit[i+1][j] and max_profit[i][j-1] are already computed, with the exception of the main diagonal.
Third, the data structure chosen is extremely inefficient. I am using only half of the space allocated for max_profit[][]. Plus, at each iteration, I only need the last computed diagonal. An array of size num_of_treats should suffice.
Here is working code using this improved algorithm. I really like it. I even used bit operators for the first time.
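For illustration only, a sketch of such a diagonal, O(n)-space DP (this is not the poster's linked code; the names treats and maxRevenue are made up, and it does not use bit operators):

#include <vector>
#include <algorithm>

long long maxRevenue(const std::vector<int>& treats)
{
    int n = treats.size();
    // dp[i] = best total for the subarray of the current length starting at index i
    std::vector<long long> dp(n);
    for (int i = 0; i < n; ++i)
        dp[i] = (long long)treats[i] * n;          // length-1 subarrays are taken on the last turn (age n)
    for (int len = 2; len <= n; ++len) {           // move one diagonal away from the main diagonal per pass
        int age = n - len + 1;                     // turn on which the first treat of this subarray is taken
        for (int i = 0; i + len <= n; ++i)
            dp[i] = std::max((long long)treats[i] * age + dp[i + 1],
                             (long long)treats[i + len - 1] * age + dp[i]);
    }
    return dp[0];                                  // whole array, starting at index 0
}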

Comparing all elements of an array

For a program that I am writing for fun (one that finds the Highest Common Factor and the Lowest Common Multiple for you), I've come across some difficulty.
I have two arrays that contain 14 numbers. To find the Lowest Common Multiple of all the numbers, I need to compare every element in each array. So far I've got this test:
for(int i = 0; i < C_I_14; i++)
{
    for(int j = 0; j < C_I_14; j++)
    {
        if(array[i] == arr[j])
        {
            tesst[i] = array[i];
        }
    }
}
(where C_I_14 = 14)
The thing is, there are endless amounts of things that could go wrong with:
tesst[i] = array[i]
So, can anyone help me sort out my little algorithm?
Sort each of your input arrays, then get the intersection using std::set_intersection.
If the ordering matters, you will find
std::mismatch
std::lexicographical_compare
quite useful.
Otherwise, look at
std::sort (!! important) followed by
std::set_intersection
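A minimal sketch of that suggestion (the array names array, arr and the constant C_I_14 follow the question; common collects the matching elements):

#include <algorithm>
#include <iterator>
#include <vector>

int main()
{
    const int C_I_14 = 14;
    int array[C_I_14] = { /* ... */ };
    int arr[C_I_14]   = { /* ... */ };

    std::sort(array, array + C_I_14);          // both ranges must be sorted first
    std::sort(arr, arr + C_I_14);

    std::vector<int> common;
    std::set_intersection(array, array + C_I_14,
                          arr, arr + C_I_14,
                          std::back_inserter(common));
    // common now holds the elements present in both arrays
}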