How do I know the number of cells between two points in an array? - c++

I have a 2D array and two given points p1(x1, y1) and p2(x2, y2). Is there any way to know the number of cells between them?

For a point p(i, j), its position in a matrix is i*width + j, where width is the width of the matrix. Hence the number of cells between two elements is abs((i1*width + j1) - (i2*width + j2)).
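For illustration, a minimal sketch of this formula (the function name cellsBetween is hypothetical):

#include <cstdlib>

// Row-major cell distance between p1(i1, j1) and p2(i2, j2) in a matrix
// of the given width.
int cellsBetween(int i1, int j1, int i2, int j2, int width)
{
    return std::abs((i1 * width + j1) - (i2 * width + j2));
}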

Related

C++ Search a 2D vector for elements surrounding a chosen element?

Stuck here in my assignment. I am working with 2D vectors. My professor wants us to write a program that has the user enter a size of a matrix (N x N) and print the matrix with random 1s and 0s, which I have done.
Where I am stuck is that he then wants us to find "nonzero" elements around a certain element. For instance:
0 0 0
0 1 1
1 1 1
Now the user is asked to type in a row and column (to locate an element), and then search for nonzero values adjacent to that element. So if rows and columns start at 0, row 1 and column 1 holds the value "1" (the center of the matrix) and has 4 adjacent nonzero elements. I am not quite sure where to go from here. Would I use the find code? I am not sure how to limit that to the locations adjacent to one element.
Thank you
Hint: if you want to look at the adjacent elements, you can just shift each index by one position. For example, if the given (row, column) is (1, 1), the adjacent positions are (0, 1), (2, 1), (1, 0), (1, 2). You should make sure your code only reads indices in the range 0..N-1 in each dimension.
This is your assignment and you should do your best to finish it. Go for it and make us proud!
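Building on the hint above, here is a minimal sketch (the function name is hypothetical). Note that the question's example counts 4 nonzero neighbours around the centre of the sample grid, which only works out if diagonal cells count as adjacent, so this sketch checks all eight surrounding cells:

#include <vector>

// Count the nonzero cells among the (up to 8) neighbours of (row, col),
// skipping any index that falls outside the N x N matrix.
int countAdjacentNonzero(const std::vector<std::vector<int>>& m, int row, int col)
{
    const int n = static_cast<int>(m.size());
    int count = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            if (dr == 0 && dc == 0)
                continue; // skip the element itself
            int r = row + dr, c = col + dc;
            if (r >= 0 && r < n && c >= 0 && c < n && m[r][c] != 0)
                ++count;
        }
    return count;
}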

Vector on upper half of hemisphere

I have a normal vector N, which defines the upper half of a hemisphere, and a function which creates random points P on the sphere.
Now I want to know whether the randomly chosen point is on the upper half. Is it safe to assume that if the length of N+P is greater than or equal to 1, P is on the upper half, or is there a better way to calculate this in glm?
@Raxvan gave a perfectly valid answer for how to do it properly: use the dot product and check that it is non-negative.
Answering your original idea that you also re-stated in the comments:
if the length of N+P is greater than or equal to 1, P is on the upper half
this is an incorrect test. Yes, it returns "true" for all the correct points, but it does not filter out all the incorrect points. For example, consider N = (0, 0, 1) (i.e. the vector along the Z-axis) and P = (0.99, 0, -0.14) (i.e. a vector just a bit below the XY-plane, far out along the X-axis). Obviously P is not in the "upper hemisphere", but N + P is (0.99, 0, 0.86) and its length is clearly more than 1.
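A minimal sketch of the dot-product test in glm (assuming P lies on the unit sphere; the function name is hypothetical):

#include <glm/glm.hpp>

// P lies on the hemisphere defined by normal N exactly when the angle
// between N and P is at most 90 degrees, i.e. the dot product is non-negative.
bool onUpperHemisphere(const glm::vec3& N, const glm::vec3& P)
{
    return glm::dot(N, P) >= 0.0f;
}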

Efficient data structure for sparse data lookup

Situation:
Given some points with coordinates (x, y).
Range: 0 < x < 100,000,000 and 0 < y < 100,000,000.
I have to find the smallest square which contains at least N points on its edges or inside it.
I used a vector to store the coordinates and searched all squares from side length minLength up to side length maxLength (applying brute force over the relevant space):
#include <cmath>
#include <algorithm>
#include <vector>

struct Point
{
    int x;
    int y;
};

std::vector<Point> P;

int minLength = static_cast<int>(std::sqrt(N)) - 1;

// bigx   = largest x coordinate of any point
// bigy   = largest y coordinate of any point
// smallx = smallest x coordinate of any point
// smally = smallest y coordinate of any point
int maxLength = std::min(bigx - smallx, bigy - smally);
For each candidate square, I traversed the complete vector to check whether at least N points lie on its edges or inside it.
This was quite time-inefficient.
Q1. What data structure should I use to improve time efficiency without changing the algorithm I used?
Q2. Is there a more efficient algorithm for this problem?
The smallest such square has points on 2 opposite edges: if it did not, you could shrink the square by 1 and still contain the same number of points. That means the possible coordinates of the edges are limited to those of the input points. The input points are probably not on the corners, though. (For a minimum rectangle, there would be points on all 4 edges, as you could shrink one dimension without altering the other.)
The next thing to realize is that each point divides the plane into 4 quadrants, and each quadrant contains a number of points. (These can add up to more than the total number of points, as the quadrants have one pixel of overlap.) Let's say that NW(p) is the number of points to the northwest of point p, i.e. those that have x >= px and y >= py. Then the number of points in a square is NW(bottomleft) + NW(topright) - NW(bottomright) - NW(topleft).
It's fairly easy to calculate NW(p) for all input points. Sort them by x, and for equal x by y. The most northwestern point has NW(p) == 0. The next point has NW(p) == 1 if it's to the southeast of the first point, otherwise NW(p) == 0. It's also useful to keep track of SW(p) at this stage, since you're working through the points from west to east and they're therefore not sorted north to south. Having calculated NW(p), you can determine the number of points in a square S in O(1).
Recall that the square size is restricted by the need to have points on opposite edges. Assume the points are on the left (western) and right (eastern) edges; you still have the points sorted in x order. Start by assuming the left edge is at your leftmost x coordinate, and see where the right edge must be to contain N points. Now shift the left edge to the next x coordinate and find a new right edge (and thus a new square). Do this until the right edge of the square reaches the rightmost point.
It's also possible that the square is constrained in the y direction. Just sort the points in y order and repeat, then choose the smaller square of the two outcomes.
Since you're running linearly through the points in x and y direction, that part is just O(N) and the dominant factor is the O(N log N) sort.
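To illustrate the edge sweep, here is a simplified one-dimensional sketch (the function name is hypothetical): with the points sorted by x, slide a window of N consecutive points and record the smallest x-extent. The full algorithm additionally verifies, via the NW(p) counts, that the corresponding square really contains N points.

#include <algorithm>
#include <vector>

// Smallest x-extent of any window of N consecutive points.
// Assumes 0 < N <= xs.size().
int smallestXExtent(std::vector<int> xs, std::size_t N)
{
    std::sort(xs.begin(), xs.end());
    int best = xs.back() - xs.front();
    for (std::size_t i = 0; i + N <= xs.size(); ++i)
        best = std::min(best, xs[i + N - 1] - xs[i]);
    return best;
}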
Look at http://en.wikipedia.org/wiki/Space_partitioning for algorithms that use the divide-and-conquer technique to solve this. It is definitely solvable in polynomial time.
Another variant algorithm can proceed along the following lines:
Generate a Voronoi diagram on the points to get neighbour information. [ O(n log n) ]
Now use dynamic programming; the DP will be similar to the problem of finding the maximum subarray in a 2D array. Here, instead of the sum of numbers, you keep a count of the points before each cell.
2.a Essentially a recurrence similar to this will hold. [ O(n) ]
Count(0,0 to x,y) = Count(0,0 to x-1,y) + Count(0,0 to x,y-1)
                  - Count(0,0 to x-1,y-1) + (1 if there is a point at (x,y), else 0)
Your recurrence will have to account for all the points in the neighbourhood to the left and above, not just the single cells above and to the left as written above.
Once the DP table is ready, you can query the number of points in a square in O(1).
A further O(n^2) loop over all possible square positions then finds the smallest suitable square.
You can even greedily start from the smallest squares first; that way you can end your search as soon as you find a suitable square.
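A minimal sketch of the prefix-count table and its O(1) query (function names hypothetical). This assumes the coordinates are small enough, or have been compressed, to index a W x H occupancy grid where pts[x][y] is 1 if a point sits at (x, y):

#include <vector>

// cnt[x+1][y+1] = number of points in the rectangle from (0,0) to (x,y),
// built with the recurrence above.
std::vector<std::vector<int>> buildPrefixCounts(const std::vector<std::vector<int>>& pts)
{
    const int W = static_cast<int>(pts.size());
    const int H = static_cast<int>(pts[0].size());
    std::vector<std::vector<int>> cnt(W + 1, std::vector<int>(H + 1, 0));
    for (int x = 0; x < W; ++x)
        for (int y = 0; y < H; ++y)
            cnt[x + 1][y + 1] = pts[x][y]
                              + cnt[x][y + 1] + cnt[x + 1][y] - cnt[x][y];
    return cnt;
}

// Points inside the square with corners (x1, y1) and (x2, y2), inclusive.
int pointsInSquare(const std::vector<std::vector<int>>& cnt,
                   int x1, int y1, int x2, int y2)
{
    return cnt[x2 + 1][y2 + 1] - cnt[x1][y2 + 1] - cnt[x2 + 1][y1] + cnt[x1][y1];
}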
An R-tree allows spatial searching but has no STL implementation, although SQLite would allow a binding. It can answer queries like "get all points within range" and "k nearest neighbours".
Finding the region with the most dense data is a problem similar to clustering.
Iterate over the points, finding the N nearest entries to each point. Then generate the smallest enclosing circle, centred at the midpoint of (min(x), max(x)) and (min(y), max(y)). A square containing all the neighbours can be formed, with a side length somewhere between r*sqrt(2) and 2r for a circle of radius r.
Time taken: O(X) to build the structure,
O(X N log(X)) to search for the smallest cluster.
Note: there are a bunch of answers for your second question (which will probably reap bigger benefits), but I'm only addressing your first one, i.e. what data structure to use without changing the algorithm.
There, I think your choice of a vector is already pretty good, because in general vectors offer the best payload/overhead ratio and the fastest iteration. In order to find specific bottlenecks, use a profiler; otherwise you are only guessing. With large vectors, there are a few things to avoid though (see the sketch after this list):
Overallocation: this wastes space.
Underallocation: this causes copying when the vector is grown to the necessary size.
Copying.
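For instance, when the number of points is known up front, a single reserve() call avoids both underallocation and the copying it causes (Point is the struct from the question; readPoints is hypothetical):

#include <vector>

std::vector<Point> readPoints(std::size_t count)
{
    std::vector<Point> pts;
    pts.reserve(count); // one allocation: no growth copies, no wasted slack
    for (std::size_t i = 0; i < count; ++i)
        pts.push_back(Point{0, 0}); // in practice, fill from your input
    return pts;
}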

Count submatrices with all ones

I am given an N*M grid of 0s and 1s. I need to find the number of submatrices of size A*B which have all 1s inside them.
For example, suppose I have a 2*6 grid.
The grid is:
0 1 1 1 1 0
0 1 1 1 1 1
Now say I want to find submatrices of size 2*3.
The answer here is 2.
EDIT: The following hints assume that by "submatrix" you meant "the intersection of a contiguous subset of rows and a contiguous subset of columns". (Usually a submatrix is allowed to skip rows and columns.)
I believe this is a homework question, so I'll just provide a hint instead of a full answer.
Suppose there was a way to efficiently calculate, for each cell (i, j), whether it is the rightmost cell of a run of at least B 1s in its row. How would that help?
Another hint: any given cell (i, j) is either the bottom-rightmost corner of some A*B block of 1s, or it isn't.
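Putting both hints together, a possible sketch (not necessarily the approach your professor expects; the function name is hypothetical): run[i][j] holds the number of consecutive 1s ending at column j of row i, and a cell is the bottom-right corner of an A*B all-ones block exactly when run[r][j] >= B for that row and the A-1 rows above it.

#include <vector>

int countAllOnesBlocks(const std::vector<std::vector<int>>& g, int A, int B)
{
    const int n = static_cast<int>(g.size());
    const int m = static_cast<int>(g[0].size());
    std::vector<std::vector<int>> run(n, std::vector<int>(m, 0));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
            if (g[i][j] == 1)
                run[i][j] = (j > 0 ? run[i][j - 1] : 0) + 1;

    int count = 0;
    for (int i = A - 1; i < n; ++i)
        for (int j = B - 1; j < m; ++j) {
            bool ok = true;
            for (int r = i - A + 1; r <= i && ok; ++r)
                ok = run[r][j] >= B;
            count += ok;
        }
    return count;
}

On the 2*6 example above with A = 2 and B = 3, this returns 2, matching the expected answer.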

Compute the sum of values in a rectangular area of an array

I have a very big array of many values, stored as a row-major 1D array.
For example:
1 2 3
4 5 6
will be stored as int array[] = {1, 2, 3, 4, 5, 6};
What I have to do is: given row1, row2, column1, column2, print out that area's sum, and I will be asked to calculate different areas many times.
What I thought of is to first use a nested loop to traverse the array and store each row's sum in sum_row, each column's sum in sum_column, and the total of all elements in totalSum.
Then take totalSum, subtract the rows and columns that surround the area, and add back the elements that were subtracted twice.
But it doesn't seem fast enough. Is there any algorithm that can do it faster, or some coding style tips that can shrink the constant factor?
Thanks in advance.
It seems to me that you have replaced one double iteration with another. The problem is in adding back "the elements that were subtracted twice"; unless I'm mistaken, this involves iterating over those elements to sum them.
Instead, just iterate over the rectangular area that you need to sum. I doubt it will be any slower.
A more efficient algorithm can be obtained by generating the matrix of summed upper-left submatrices (see the Wikipedia article on summed-area tables). You can then compute any submatrix sum by looking up four area sums.
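A minimal sketch of a summed-area table over the question's row-major 1D array (function names hypothetical):

#include <vector>

// S has (rows+1) x (cols+1) entries, stored row-major; the entry at
// (i+1, j+1) is the sum of the submatrix from (0,0) through (i,j).
std::vector<long long> buildSummedAreaTable(const int* a, int rows, int cols)
{
    std::vector<long long> S((rows + 1) * (cols + 1), 0);
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            S[(i + 1) * (cols + 1) + (j + 1)] = a[i * cols + j]
                + S[i * (cols + 1) + (j + 1)]
                + S[(i + 1) * (cols + 1) + j]
                - S[i * (cols + 1) + j];
    return S;
}

// Sum over rows [row1..row2] and columns [column1..column2], inclusive.
long long areaSum(const std::vector<long long>& S, int cols,
                  int row1, int column1, int row2, int column2)
{
    const int w = cols + 1;
    return S[(row2 + 1) * w + (column2 + 1)] - S[row1 * w + (column2 + 1)]
         - S[(row2 + 1) * w + column1] + S[row1 * w + column1];
}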