I have a C++ program in which the user clicks on two points on the screen, and I have to create a logarithmic scale from that, like:
10 100 1000 10000
given that my first point, 10, is at (say) pixel 5 and 10000 is at pixel 200.
So how do I calculate the equation that makes my mouse show the log value when it points at the screen?
Thanks.
All you need is the log function. Let's first assume no offset. If you are given a value of x on the X-axis, you can get its log value (e.g. in base 10) by:
log(x) / log(10)
If you want x to count from a certain offset (say x0), you should adjust x:
log(x - x0) / log(10)
If you want the resulting value to be offset by some amount (say lx0), just add it:
log(x - x0) / log(10) + lx0
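Putting the pieces together for the concrete numbers in the question (value 10 at pixel 5, value 10000 at pixel 200), a minimal C++ sketch of both directions of the mapping might look like this; the anchor names are illustrative:

#include <cmath>

// Anchor points from the question: value 10 at pixel 5, value 10000 at pixel 200.
const double v1 = 10.0,    p1 = 5.0;
const double v2 = 10000.0, p2 = 200.0;

// Map a value to its pixel: its log-space position is scaled onto the pixel range.
double pixelFromValue(double v)
{
    return p1 + (std::log10(v) - std::log10(v1))
              * (p2 - p1) / (std::log10(v2) - std::log10(v1));
}

// Inverse mapping: the value under the mouse at pixel p.
double valueFromPixel(double p)
{
    double t = (p - p1) / (p2 - p1);  // 0 at p1, 1 at p2
    return std::pow(10.0, std::log10(v1) + t * (std::log10(v2) - std::log10(v1)));
}

As a sanity check, pixelFromValue(10000) gives 200 and valueFromPixel(5) gives 10.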
I have a population of so called "Dots" that search for food. Every Dot has a sight_ value, which indicates the range in which it can see food.
The position of each Dot is saved as a pair<uint16_t,uint16_t>. The positions of all foodsources are in a vector<pair<uint16_t,uint16_t>>.
Now I want to find, for every Dot, the closest food source that it can see. And I don't want to calculate the distance of every combination.
My idea was to create a copy of the food vector, sort one copy by x and the other by y, find the interval [x - sight, x + sight] in one vector and [y - sight, y + sight] in the other, and then build the intersection of both.
I've read about std::set_intersection, but it requires both ranges to be sorted by the same rule.
Any ideas how I could do this? It could also be that my idea is just the wrong approach.
Thanks
IceFreez3r
Edit:
I did some runtime approximations:
Sort Food: n log n
Find Interval for one Coordinate and one Dot: 2 log n (lower and upper bound)
If we assume equal distribution of food sources, we can calculate the bound that is estimated to be closer to the middle first and then calculate the second bound in the rest interval. This would reduce the runtime to: log n + log(n/2) (Just realized this is probably not *that* powerful: log(n/2) =~ log(n) - 1)
Build intersection: #x * #y =~ (n * sight/testgroundsize)^2
Compute exact Distance for every Food in Intersection: n * (sight/testgroundsize)^2
Sum: 2 n log n + 2 * #Dots * (log n + log(n/2) + (n * sight/testgroundsize)^2 + n * (sight/testgroundsize)^2)
Sum with just limiting one coordinate: n log n + #Dots * (log n + log(n/2) + n * sight/testgroundsize)
I did some tests and just calculated the above formulas at run time:
int dots = dots_.size();
// Estimated cost of the two-coordinate intersection approach
int sum = 2 * n * log(n) + 2 * dots * (log(n) + log(n / 2)
        + pow(n * (sum_sight / dots) / testground_size_, 2)
        + n * pow((sum_sight / dots) / testground_size_, 2));
// Estimated cost when limiting only one coordinate
int sum2 = n * log(n) + dots * (log(n) + log(n / 2)
         + n * (sum_sight / dots) / testground_size_);
// Compare both against the brute-force cost n * dots
cout << n * dots << endl << sum << endl << sum2 << endl;
It turned out the Intersection idea is just bad. While the idea of just limiting one coordinate is at least better than brute-force.
I haven't thought about the grid idea yet, @Daniel Jour.
You're stepping into a whole field of interesting approaches to this problem. Terms to Google are binary space partitioning, quadtrees, ... and of course nearest neighbour search.
A relatively simple but effective approach when the dots are far more spread than what their "visible range" is:
Select a value "grid size".
Create a map from grid coordinates to a list/set of entities
For each food source: put them in the map at their grid coordinates
For each dot: put them in the map at their grid coordinates and also in the neighbour grid "cells". The size of the neighbourhood depends on the grid size and the dot's sight value
For each entry in the map which contains at least one dot: Either do this algorithm recursively with a smaller grid size or use the brute force approach: check each dot in that grid cell against each food source in that grid cell.
This is a linear algorithm, compared with the quadratic brute force approach.
Calculation of grid coordinates: grid_x = int(x / grid_size) ... same for other coordinate.
Neighbourhood: steps = ceil(sight_value / grid_size) .. the neighbourhood is a square with side length 2×steps + 1 centred at the dot's grid coordinates
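As a minimal C++ sketch of this scheme, assuming the pair<uint16_t,uint16_t> layout from the question (all names are illustrative). This variant scans the neighbour cells at query time instead of registering each dot in them, which amounts to the same checks:

#include <cstdint>
#include <cmath>
#include <map>
#include <vector>
#include <utility>

using Pos  = std::pair<uint16_t, uint16_t>;
using Cell = std::pair<int, int>;
using Grid = std::map<Cell, std::vector<int>>;

// Bucket every food source into its grid cell once.
Grid buildGrid(const std::vector<Pos>& food, double grid_size)
{
    Grid grid;
    for (int i = 0; i < (int)food.size(); ++i)
        grid[{ int(food[i].first / grid_size),
               int(food[i].second / grid_size) }].push_back(i);
    return grid;
}

// For one dot, scan only the neighbouring cells its sight can reach and
// return the index of the closest visible food source (-1 if none).
int closestFood(const Pos& dot, double sight, const std::vector<Pos>& food,
                const Grid& grid, double grid_size)
{
    const int steps = (int)std::ceil(sight / grid_size);
    const int gx = int(dot.first / grid_size);
    const int gy = int(dot.second / grid_size);

    int best = -1;
    double best2 = sight * sight;  // compare squared distances, no sqrt needed
    for (int cx = gx - steps; cx <= gx + steps; ++cx)
        for (int cy = gy - steps; cy <= gy + steps; ++cy) {
            auto it = grid.find({cx, cy});
            if (it == grid.end()) continue;
            for (int i : it->second) {
                double dx = double(food[i].first)  - dot.first;
                double dy = double(food[i].second) - dot.second;
                double d2 = dx * dx + dy * dy;
                if (d2 <= best2) { best2 = d2; best = i; }
            }
        }
    return best;
}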
I believe your approach is incorrect, and this can be verified mathematically. What you can do instead is calculate the magnitude of the vector joining the dot and the food source by means of the Pythagorean theorem, and check that this magnitude is less than the observation limit. This deals exclusively with determining relative distance, as defined by the Cartesian coordinate system and the standard unit of measurement. Regarding efficiency, the first order of business is to determine whether the alternative approach is actually less time-consuming in computational terms, rather than merely moving the cost into other logic. Of course, the ideal is a design in which the time taken genuinely decreases, and is not merely numerically contained by refactoring.
Now, if the position of a dot can be specified as any two numbers one may choose, this implies a frame of reference called the basis, and also one local to the dot in question. With respect to both, one can quantify position and other such characteristics and properties. As a consequence, it would seem that you need 2n data structures, where n is the number of dots in the environment, each containing the values sorted relative to one dot, and quite frankly it is unclear whether this approach would even work, let alone be optimal. You state the design constraint that the solution shall not compute the distance from each dot to each food source, but to achieve this one must implement other procedures in order to derive the correct results, which brings us back to the efficiency concerns above. Therefore, you may be better off simply calculating the distance in each case. This is somewhat elegant.
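As a one-function sketch of the check described above (comparing squared magnitudes avoids the square root entirely; the names are illustrative):

#include <cstdint>
#include <utility>

// True if the food source lies within the dot's sight radius.
bool canSee(const std::pair<uint16_t, uint16_t>& dot,
            const std::pair<uint16_t, uint16_t>& food, double sight)
{
    double dx = double(food.first)  - double(dot.first);
    double dy = double(food.second) - double(dot.second);
    return dx * dx + dy * dy <= sight * sight;  // Pythagoras, squared
}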
For a university project I am currently working on, I have to create a point cloud by reading images from this dataset. These are basically video frames, and for each frame there is an rgb image along with a corresponding depth image.
I am familiar with the equation z = f*b/d; however, I am unable to figure out how the data should be interpreted. Information about the camera that was used to take the video is not provided, and the project only states the following:
"Consider a horizontal/vertical field of view of the camera 48.6/62
degrees respectively"
I have little to no experience in computer vision, and I have never encountered 2 fields of view being used before. Assuming I use the depth from the image as is (for the z coordinate), how would I go about calculating the x and y coordinates of each point in the point cloud?
Yes, it's unusual to specify multiple fields of view. Given a typical camera (squarish pixels, minimal distortion, view vector through the image center), usually only one field-of-view angle is given -- horizontal or vertical -- because the other can then be derived from the image aspect ratio.
Specifying a horizontal angle of 48.6 and a vertical angle of 62 is particularly surprising here, since the image is a landscape view, where I'd expect the horizontal angle to be greater than the vertical. I'm pretty sure it's a typo:
When swapped, the ratio tan(62 * pi / 360) / tan(48.6 * pi / 360) is the 640 / 480 aspect ratio you'd expect, given the image dimensions and square pixels.
At any rate, a horizontal angle of t is basically saying that the horizontal extent of the image, from left edge to right edge, covers an arc of t radians of the visual field, so the pixel at the center of the right edge lies along a ray rotated t / 2 radians to the right from the central view ray. This "righthand" ray runs from the eye at the origin through the point (tan(t / 2), 0, -1) (assuming a right-handed space with positive x pointing right and positive y pointing up, looking down the negative z axis). To get the point in space at distance d from the eye, you can just normalize a vector along this ray and multiply it by d. Assuming the samples are linearly distributed across a flat sensor, I'd expect that for a given pixel at (x, y) you could calculate its corresponding ray point with:
p = (dx * tan(hfov / 2), dy * tan(vfov / 2), -1)
where dx is 2 * (x - width / 2) / width, dy is 2 * (y - height / 2) / height, and hfov and vfov are the field-of-view angles in radians.
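A minimal C++ sketch of this back-projection, under the assumption that the depth value d is the Euclidean distance from the eye (hence the normalization step); the function and type names are illustrative:

#include <cmath>

struct Vec3 { double x, y, z; };

// Back-project pixel (x, y) with depth d into camera space, assuming d is
// the distance from the eye to the point along the pixel's ray.
Vec3 pixelToPoint(double x, double y, double d,
                  int width, int height, double hfov, double vfov)
{
    double dx = 2.0 * (x - width / 2.0) / width;    // -1 .. 1 across the image
    double dy = 2.0 * (y - height / 2.0) / height;  // -1 .. 1 across the image
    Vec3 p { dx * std::tan(hfov / 2.0), dy * std::tan(vfov / 2.0), -1.0 };
    double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return { p.x / len * d, p.y / len * d, p.z / len * d };  // normalize, scale by d
}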
Note that the documentation that accompanies your sample data links to a Matlab file that shows the recommended process for converting the depth images into a point cloud and distance field. In it, the fields of view are baked with the image dimensions to a constant factor of 570.3, which can be used to recover the field of view angles that the authors believed their recording device had:
2 * atan(320 / 570.3) * (180 / pi) ≈ 58.6
which is indeed pretty close to the 62 degrees you were given.
From the Matlab code, it looks like the value in the image is not distance from a given point to the eye, but instead distance along the view vector to a perpendicular plane containing the given point ("depth", or basically "z"), so the authors can just multiply it directly with the vector (dx * tan(hfov / 2), dy * tan(vfov / 2), -1) to get the point in space, skipping the normalization step mentioned earlier.
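Continuing the Vec3 sketch above, the z-depth variant would then skip the normalization and scale the un-normalized ray vector directly:

// Variant for z-depth values: multiply (dx*tan, dy*tan, -1) by z directly.
Vec3 pixelToPointZDepth(double x, double y, double z,
                        int width, int height, double hfov, double vfov)
{
    double dx = 2.0 * (x - width / 2.0) / width;
    double dy = 2.0 * (y - height / 2.0) / height;
    return { dx * std::tan(hfov / 2.0) * z,
             dy * std::tan(vfov / 2.0) * z,
             -z };
}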
According to the HOG process, as described in the paper Histogram of Oriented Gradients for Human Detection (see link below), the contrast normalization step is done after the binning and the weighted vote.
I don't understand something - If I already computed the cells' weighted gradients, how can the normalization of the image's contrast help me now?
As far as I understand, contrast normalization is done on the original image, whereas for computing the gradients, I already computed the X,Y derivatives of the ORIGINAL image. So, if I normalize the contrast and I want it to take effect, I should compute everything again.
Is there something I don't understand well?
Should I normalize the cells' values?
Is the normalization in HOG not about contrast anyway, but about the histogram values (the counts in each bin)?
Link to the paper:
http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf
The contrast normalization is achieved by normalization of each block's local histogram.
The whole HOG extraction process is well explained here: http://www.geocities.ws/talh_davidc/#cst_extract
When you normalize the block histogram, you actually normalize the contrast in this block, if your histogram really contains the sum of magnitudes for each direction.
The term "histogram" is confusing here, because you do not count how many pixels has direction k, but instead you sum the magnitudes of such pixels. Thus you can normalize the contrast after computing the block's vector, or even after you computed the whole vector, assuming that you know in which indices in the vector a block starts and a block ends.
The steps of the algorithm, to my understanding (this worked for me with a 95% success rate), are:
Define the following parameters (In this example, the parameters are like HOG for Human Detection paper):
A cell size in pixels (e.g. 6x6)
A block size in cells (e.g. 3x3 ==> Means that in pixels it is 18x18)
Block overlapping rate (e.g. 50% ==> Means that both block width and block height in pixels have to be even. It is satisfied in this example, because the cell width and cell height are even (6 pixels), making the block width and height also even)
Detection window size. The size must be divisible by half of the block size without remainder (so it is possible to exactly place the blocks within it with 50% overlap). For example, if the block width is 18 pixels, the window width must be a multiple of 9 (e.g. 9, 18, 27, 36, ...). Same for the window height. In our example, the window width is 63 pixels and the window height is 126 pixels.
Calculate gradient:
Compute the X difference using convolution with the vector [-1 0 1]
Compute the Y difference using convolution with the transpose of the above vector
Compute the gradient magnitude in each pixel using sqrt(diffX^2 + diffY^2)
Compute the gradient direction in each pixel using atan(diffY / diffX). Note that atan returns values between -90 and 90, while you will probably want values between 0 and 180, so just flip all the negative values by adding 180 degrees to them. Note that in HOG for Human Detection they use unsigned directions (between 0 and 180). If you want to use signed directions, you need a little more effort: if diffX and diffY are both positive, your atan value is between 0 and 90; leave it as is. If both are negative, you get the same range of possible values; here, add 180 so the direction is flipped to the other side. If diffX is positive and diffY is negative, you get values between -90 and 0; leave them as they are (you can add 360 if you want them positive). If diffY is positive and diffX is negative, you again get the same range, so add 180 to flip the direction to the other side.
"Bin" the directions. For example, 9 unsigned bins: 0-20, 20-40, ..., 160-180. You can easily achieve that by dividing each value by 20 and flooring the result. Your new binned directions will be between 0 and 8.
Do for each block separately, using copies of the original matrix (because some blocks are overlapping and we do not want to destroy their data):
Split to cells
For each cell, create a vector with 9 entries (one per bin). In each entry, store the sum of the magnitudes of all pixels in the cell whose binned direction matches that entry. We have 6x6 = 36 pixels in a cell in total. So, for example, if 2 pixels have direction 0 while the magnitude of the first one is 0.231 and the magnitude of the second one is 0.13, you should write at index 0 of your vector the value 0.361 (= 0.231 + 0.13).
Concatenate all the vectors of all the cells in the block into a large vector. This vector size should of course be NUMBER_OF_BINS * NUMBER_OF_CELLS_IN_BLOCK. In our example, it is 9 * (3 * 3) = 81.
Now, normalize this vector. Use k = sqrt(v[0]^2 + v[1]^2 + ... + v[n]^2 + eps^2) (I used eps = 1). After you computed k, divide each value in the vector by k - thus your vector will be normalized.
Create final vector:
Concatenate all the vectors of all the blocks into 1 large vector. In my example, the size of this vector was 6318
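Below is a minimal C++ sketch of the per-pixel direction binning and of the block normalization from the steps above. Note one substitution: it uses atan2 with a fold to [0, 180) instead of the atan-plus-flip rule described in the gradient step, which yields the same unsigned bins; all names are illustrative:

#include <algorithm>
#include <cmath>
#include <vector>

// Gradient magnitude of one pixel from its X and Y differences.
double magnitudeOf(double diffX, double diffY)
{
    return std::sqrt(diffX * diffX + diffY * diffY);
}

// Unsigned direction bin (0..8) of one pixel. atan2 handles the quadrants,
// and the fold to [0, 180) reproduces the "+180 for negatives" rule above.
int binOf(double diffX, double diffY)
{
    const double PI = 3.14159265358979323846;
    double deg = std::atan2(diffY, diffX) * 180.0 / PI;  // (-180, 180]
    if (deg < 0.0)    deg += 180.0;                      // fold to unsigned
    if (deg >= 180.0) deg -= 180.0;                      // 180 is the same as 0
    return std::min(int(deg / 20.0), 8);                 // 9 bins of 20 degrees
}

// Normalize a concatenated block vector: k = sqrt(sum(v_i^2) + eps^2).
void normalizeBlock(std::vector<double>& v, double eps = 1.0)
{
    double k = eps * eps;
    for (double x : v) k += x * x;
    k = std::sqrt(k);
    for (double& x : v) x /= k;
}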
I have geometries in a PostGIS database with GeoDjango's default SRID, WGS 84, and have found lookups directly in degrees to be much faster than in kilometres, presumably because the database can skip the projections.
Basically, Place.objects.filter(location__distance__lte=(point, D(km=10))) is several orders of magnitude slower than Place.objects.filter(location__dwithin=(point, 10)) as the first query produces a full scan of the table. But sometimes I need to lookup places with a distance threshold in kilometres.
Is there a somewhat precise way to convert the 10 km to degrees for the query?
Maybe another equivalent lookup with the same performance that I should be using instead?
You have several approaches to deal with your problem, here are two of them:
If you do not care much about precision, you could use dwithin with a naive metre-to-degree conversion: degrees(x metres) = x / 40000000 * 360, so 10 km ≈ 10000 / 40000000 * 360 = 0.09 degrees. You would get nearly exact results near the equator, but as you go north or south the covered distance shrinks (we are living on a sphere, after all). Imagine a region that starts out as a circle and shrinks to an infinitely narrow ellipse as it approaches one of the poles.
If you care about precision you can use:
import math

max_distance = 10000  # distance in metres
# widen the buffer by 1/cos(latitude) to compensate for east-west shrink;
# note the degree-to-radian conversion uses pi/180
buffer_width = max_distance / 40000000. * 360. / math.cos(point.y * math.pi / 180.)
buffered_point = point.buffer(buffer_width)
Place.objects.filter(
    location__distance__lte=(point, D(m=max_distance)),
    location__overlaps=buffered_point,
)
The basic idea is to query for all places that are within a circle around your point in degrees. This part is very performant, because the circle is in degrees and the geo index can be used. But the circle is sometimes a bit too big, so we keep the filter in metres to exclude places that may be a bit farther away than the allowed max_distance.
A small update to the answer of frankV.
import math

max_distance = 10000  # distance in metres
buffer_width = max_distance / 40000000. * 360. / math.cos(point.y * math.pi / 180.)
buffered_point = point.buffer(buffer_width)
Place.objects.filter(
    location__distance__lte=(point, D(m=max_distance)),
    location__intersects=buffered_point,
)
I found that __overlaps does not work with PostgreSQL and a point, but __intersects does.
To be sure it actually speeds up your query, check the query's explain plan (queryset.query gives you the query used).
I am dealing with some positions of objects in Cocos2dx, but this question applies to virtually every situation in which a smooth start and stop is necessary.
Here's what I am looking for:
Given an origin position at x = 0 and a final position of x = 8, I want to accelerate slowly, covering more ground the further I am from the start, and then have it slow down as it reaches the end. Is there a smoothing algorithm for this?
There are lots of algorithms for this. One idea is to set up a linear interpolation:
x(t) = (1.0 - t) * x0 + t * x1;
If you feed evenly spaced values of t from 0.0 to 1.0, you'll get a smooth, linear animation.
If you want a slow start and a slow end, you can use t = sin(theta) / 2.0 + 0.5 for theta from -pi/2 to pi/2.
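A minimal C++ sketch combining the interpolation with the sinusoidal easing, where u is the animation progress in [0, 1] (the function name is illustrative):

#include <cmath>

// Map linear progress u in [0, 1] to an eased position between x0 and x1.
double easedPosition(double u, double x0, double x1)
{
    const double PI = 3.14159265358979323846;
    double theta = -PI / 2.0 + u * PI;       // theta sweeps -pi/2 .. pi/2
    double t = std::sin(theta) / 2.0 + 0.5;  // eased parameter in [0, 1]
    return (1.0 - t) * x0 + t * x1;          // linear interpolation
}

For the question's example, easedPosition(u, 0.0, 8.0) starts slowly at 0, speeds up in the middle, and slows down again as it approaches 8.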
A second-order smooth path has constant acceleration during the first half, then constant deceleration during the second half.
This means you accelerate from x = 0 to x = 4 with x(t) = a*t*t, so your choice of the coefficient a directly influences the time needed: the halfway point x = 4 is reached at t = sqrt(4/a). If you decelerate with the same magnitude, you arrive at x = 8 after twice that time, T = 2*sqrt(4/a), and the formula for the second part is therefore x(t) = 8 - a*(T - t)^2.
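As a minimal C++ sketch of this path, with a the coefficient from x(t) = a*t*t and the end position generalized from the question's x = 8 (names are illustrative):

#include <cmath>

// Position at time t for accelerate-then-decelerate motion from 0 to xEnd.
double pathPosition(double t, double a, double xEnd)
{
    double tHalf  = std::sqrt(xEnd / (2.0 * a));  // time to reach the midpoint
    double tTotal = 2.0 * tHalf;                  // total travel time
    if (t <= tHalf)  return a * t * t;            // constant acceleration
    if (t >= tTotal) return xEnd;                 // arrived
    double s = tTotal - t;                        // time remaining
    return xEnd - a * s * s;                      // mirrored deceleration
}

For example, pathPosition(t, 1.0, 8.0) reaches x = 4 at t = 2 and x = 8 at t = 4.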