Is this micro-optimization, or is it optimization at all?
void Renderer::SetCamera(FLOAT x, FLOAT y, FLOAT z) {
    // Checking for zero before doing addition?
    if (x != 0) camX += x;
    if (y != 0) camY += y;
    if (z != 0) camZ += z;
    // Checking if any of the three variables are not zero, and performing the code below.
    if (x != 0 | y != 0 | z != 0) {
        D3DXMatrixTranslation(&w, camX, camY, camZ);
    }
}
Would running a for loop with vector.size() in the condition force the application to recount the elements in the vector on each iteration?
std::vector<UINT> vect;
INT vectorSize = vect.size();
for (INT Index = 0; Index < vectorSize; Index++) {
    // Do vector processing
}
// versus:
std::vector<UINT> vect;
for (INT Index = 0; Index < vect.size(); Index++) {
    // Do vector processing
}
I'm using Visual Studio, and as for the second question, it seems like something a compiler could optimize, but I'm just not sure about that.
Depending on the implementation of vector, the compiler may or may not understand that the size is not changed. After all, you call various vector functions inside the loop, any of which might change the size.
Since vector is a template, the compiler knows everything about it, so if it worked really hard, it could figure out that the size doesn't change, but that's probably too much work.
Often, you would write it like this:
for (size_t i = 0, size = vect.size(); i < size; ++i)
...
While we're at it, a similar approach is used with iterators:
for (list<int>::iterator i = lst.begin(), end = lst.end(); i != end; ++i)
...
Edit: I missed the first part:
Is this optimization?
if (x != 0) camX += x;
if (y != 0) camY += y;
if (z != 0) camZ += z;
No. First of all, even if they were ints, it wouldn't be an optimization, since checking and branching when the values are most of the time non-zero is more work than just doing the addition.
Second, and more importantly, they are floats. Besides the fact that you shouldn't compare them to 0 directly, they are almost never exactly equal to 0, so the ifs are true 99.9999% of the time.
Same thing applies to this:
if (x != 0 | y != 0 | z != 0)
In this case, however, since the matrix translation could be costly, you could do:
#define EPS 1e-6 /* epsilon */
if (x > EPS || x < -EPS || y > EPS || y < -EPS || z > EPS || z < -EPS)
and now yes, compared to a matrix multiplication, this is probably an optimization.
Note also that I used ||, which short-circuits: if, for example, x > EPS is true right from the beginning, the rest won't be evaluated; with | that won't happen.
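The same check can also be written with std::fabs; a minimal sketch (needsTranslation is just an illustrative helper name, and EPS is the same assumed tolerance as above):
#include <cmath>

const float EPS = 1e-6f; // same assumed tolerance as the EPS macro above

// True if any component's magnitude exceeds the tolerance.
// || still short-circuits, so the later fabs calls are skipped when possible.
bool needsTranslation(float x, float y, float z)
{
    return std::fabs(x) > EPS || std::fabs(y) > EPS || std::fabs(z) > EPS;
}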
I suspect that on many architectures the first three lines are an anti-optimization, because they may introduce a floating-point compare followed by a branch, which can be slower than just always doing the addition (even if it's a floating-point addition).
On the other hand, making sure that at least one component is non-zero before doing the transformation seems sound.
For your second case, size() has to be constant time and will almost certainly be inlined down to a direct access to the vector's size, so it's most likely fully optimizable. That said, saving the size off can sometimes make the code/loop easier to read, because it clearly asserts that the size won't change during the loop.
Firstly, regarding vector.size(), see this SO question. On a side note, I haven't seen an implementation where std::vector::size() isn't O(1).
As for if (x != 0) camX += x; – this cmp and the consequent jne are going to be slower than simply adding x, no matter what. Edit: unless you expect well over 50% cache misses on camX.
The first one is probably a pessimization: the check for 0 is probably slower than the addition. On top of that, in the check before the call to D3DXMatrixTranslation, you use | instead of the short-circuiting logical or, ||. Since the check before the function call probably is a time-saver (or is even semantically necessary), wrap the entire body in that check:
void Renderer::SetCamera(FLOAT x, FLOAT y, FLOAT z) {
    if (x != 0 || y != 0 || z != 0) {
        camX += x;
        camY += y;
        camZ += z;
        D3DXMatrixTranslation(&w, camX, camY, camZ);
    }
}
If all of x, y and z are zero, nothing needs to be done; otherwise, do everything.
For the second question, the compiler can hoist the vect.size() call out of the loop if it can prove that the size doesn't change while the loop runs. If it cannot prove that, it must not hoist the call.
Doing that yourself when you know the size doesn't change is good practice.
I have a problem. I want to write a method which uses the PQ formula to calculate the zeros of a quadratic equation.
As I see it, C++ doesn't support returning arrays, unlike C#, which I normally use.
How do I return either zero, 1 or 2 results?
Is there any other way to do it without an array (which doesn't exist anyway)?
I'm not really comfortable with pointers yet, so my current code is broken.
I'd be glad if someone could help me.
float* calculateZeros(float p, float q)
{
    float *x1, *x2;
    if (((p) / 2)*((p) / 2) - (q) < 0)
        throw std::exception("No Zeros!");
    x1 *= -((p) / 2) + sqrt(static_cast<double>(((p) / 2)*((p) / 2) - (q)));
    x2 *= -((p) / 2) - sqrt(static_cast<double>(((p) / 2)*((p) / 2) - (q)));
    float returnValue[1];
    returnValue[0] = x1;
    returnValue[1] = x2;
    return x1 != x2 ? returnValue[0] : x1;
}
Actually this code doesn't compile, but it's what I've done so far.
There are quite a few issues with it; first of all, I'll drop all those totally needless parentheses, which just make the code (much) harder to read:
float* calculateZeros(float p, float q)
{
    float *x1, *x2; // pointers are never initialized!!!
    if ((p / 2)*(p / 2) - q < 0)
        throw std::exception("No Zeros!"); // zeros? q just needs to be large enough!
    x1 *= -(p / 2) + sqrt(static_cast<double>((p / 2)*(p / 2) - q));
    x2 *= -(p / 2) - sqrt(static_cast<double>((p / 2)*(p / 2) - q));
    // ^ this would multiply the pointer values! but these are not initialized -> UB!!!
    float returnValue[1];
    returnValue[0] = x1; // you are assigning a pointer to a value here
    returnValue[1] = x2;
    return x1 != x2 ? returnValue[0] : x1;
    //                ^ value!         ^ pointer!
    // apart from that, if you returned a pointer to the returnValue array, you would
    // return a pointer to data with scope local to the function – i.e. the array
    // is destroyed upon leaving the function, thus the pointer returned becomes
    // INVALID as soon as the function is exited; using it would again result in UB!
}
As is, your code wouldn't even compile...
As I see C++ doesn't support arrays
Well... I assume you meant: 'arrays as return values or function parameters'. That's true for raw arrays; these can only be passed as pointers. But you can accept structs and classes as parameters or use them as return values. You want to return both calculated values? Then you could use e.g. std::array<float, 2>; std::array is a wrapper around a raw array that avoids all the hassle you have with the latter... As there are exactly two values, you could use std::pair<float, float>, too, or std::tuple<float, float>.
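For instance, a minimal sketch of the std::pair variant (calculateZerosPair is just an illustrative name, and it assumes the caller has already made sure that (p/2)^2 - q is non-negative):
#include <cmath>
#include <utility>

// Sketch only: returns both roots of x^2 + p*x + q = 0,
// assuming the discriminant (p/2)^2 - q is non-negative.
std::pair<double, double> calculateZerosPair(double p, double q)
{
    double h = p / 2;
    double r = std::sqrt(h * h - q);
    return { -h + r, -h - r };
}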
Want to be able to return either 2, 1 or 0 values? std::vector<float> might be your choice...
std::vector<float> calculateZeros(float p, float q)
{
    std::vector<float> results;
    // don't repeat the code all the time...
    double h = static_cast<double>(p) / 2; // "half"
    double s = h * h;                      // "square" (of half)
    if (/* s greater than or equal to q */)
    {
        // only enter if we CAN have a result; otherwise, the vector remains empty
        // this is far better behaviour than the exception
        double r = sqrt(s - q); // "root"
        h = -h;
        if (/* r equals 0 */)
        {
            results.push_back(h);
        }
        else
        {
            results.reserve(2); // prevents re-allocations;
                                // admitted, for just two values, we could live with...
            results.push_back(h + r);
            results.push_back(h - r);
        }
    }
    return results;
}
Now there's one final issue left: as the precision even of double is limited, rounding errors can occur (and the matter is even worse if using float; I would recommend making all the floats doubles, parameters and return values as well!). You shouldn't ever compare for exact equality (someValue == 0.0), but allow for some epsilon to cover badly rounded values:
-epsilon < someValue && someValue < +epsilon
OK, in the given case there are two originally exact comparisons involved, so we might want to do as few epsilon comparisons as possible. So:
double d = s - q;
if (d > -epsilon)
{
    // considered 0 or greater
    h = -h;
    if (d < +epsilon)
    {
        // considered 0 (and then there's no need to calculate the root at all...)
        results.push_back(h);
    }
    else
    {
        // considered greater than 0
        double r = sqrt(d);
        results.push_back(h - r);
        results.push_back(h + r);
    }
}
Value of epsilon? Well, either use a fixed, small enough value or calculate it dynamically based on the smaller of the two values (multiplied by some small factor) – and be sure to keep it positive... You might be interested in a bit more information on the matter. You don't have to worry about that material not being about C++ – the issue is the same for all languages using the IEEE 754 representation for doubles.
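Putting the pieces together, a minimal, self-contained sketch of the epsilon-based version could look like this (the fixed epsilon value is only an illustrative choice, not a recommendation):
#include <cmath>
#include <vector>

// Sketch only: returns 0, 1 or 2 real roots of x^2 + p*x + q = 0.
std::vector<double> calculateZeros(double p, double q)
{
    std::vector<double> results;
    const double epsilon = 1e-9;     // illustrative tolerance, tune for your data
    double h = p / 2;                // "half"
    double d = h * h - q;            // discriminant (p/2)^2 - q
    if (d > -epsilon)                // considered 0 or greater
    {
        h = -h;
        if (d < +epsilon)            // considered 0: one (double) root
        {
            results.push_back(h);
        }
        else                         // considered greater than 0: two roots
        {
            double r = std::sqrt(d); // "root"
            results.push_back(h - r);
            results.push_back(h + r);
        }
    }
    return results;                  // empty if there are no real roots
}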
I have an assignment at university where I'm given the code of a C++ game involving pathfinding. The pathfinding is done using a wave function, and the assignment requires me to make a certain change to the way the pathfinding works.
The assignment requires the pathfinding to always choose the path farthest away from any object other than clear space, like shown here:
And here's the result I've gotten so far:
Below I've posted the part of the Update function concerning pathfinding, as I'm pretty sure that's where I'll have to make the change.
for (int y = 0, o = 0; y < LEVEL_HEIGHT; y++) {
    for (int x = 0; x < LEVEL_WIDTH; x++, o++) {
        int nCost = !bricks[o].type;
        if (nCost) {
            for (int j = 0; j < 4; j++)
            {
                int dx = s_directions[j][0], dy = s_directions[j][1];
                if ((y == 0 && dy < 0)
                    || (y == LEVEL_HEIGHT - 1 && dy > 0)
                    || (x == 0 && dx < 0)
                    || (x == LEVEL_WIDTH - 1 && dx > 0)
                    || bricks[o + dy * LEVEL_WIDTH + dx].type)
                {
                    nCost = 2;
                    break;
                }
            }
        }
        pfWayCost[o] = (float)nCost;
    }
}
Also here is the Wave function if needed for further clarity on the problem.
I'd be very grateful for any ideas on how to proceed, since I've been struggling with this for quite some time now.
Your problem can be reduced to a problem known as the minimum bottleneck spanning tree.
For the reduction, do the following:
calculate the cost of every point/cell in space as the minimal distance to an object.
build a graph whose edges correspond to the points/cells in space, with the weights of the edges being the costs calculated in the prior step. The vertices of the graph correspond to the boundaries between cells.
For a one-dimensional space with 4 cells with costs 10, 20, 3, 5:
|10|20|3|5|
the graph would look like:
A--(w=10)--B--(w=20)--C--(w=3)--D--(w=5)--E
with nodes A-E corresponding to the boundaries of the cells.
Run, for example, Prim's algorithm to find the MST. You are looking for the direct path from the entry point (A in the example above) to the exit point (E) in the resulting tree.
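A minimal sketch of the first step – computing, for every cell, its distance to the nearest occupied cell with a multi-source BFS. The function name and the generic cellIsObject input are illustrative; in the question's terms the input would come from bricks[o].type, with width/height from LEVEL_WIDTH/LEVEL_HEIGHT, and the 4-neighbour expansion is an assumption:
#include <climits>
#include <queue>
#include <vector>

// Distance (in grid steps) from every cell to the nearest object cell,
// computed with a multi-source BFS seeded from all object cells.
std::vector<int> distanceToNearestObject(const std::vector<int>& cellIsObject,
                                         int width, int height)
{
    std::vector<int> dist(width * height, INT_MAX);
    std::queue<int> frontier;
    for (int o = 0; o < width * height; ++o) {
        if (cellIsObject[o]) {           // object cells start at distance 0
            dist[o] = 0;
            frontier.push(o);
        }
    }
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };
    while (!frontier.empty()) {
        int o = frontier.front();
        frontier.pop();
        int x = o % width, y = o / width;
        for (int j = 0; j < 4; ++j) {
            int nx = x + dx[j], ny = y + dy[j];
            if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
            int n = ny * width + nx;
            if (dist[n] > dist[o] + 1) { // first visit is the shortest in a BFS
                dist[n] = dist[o] + 1;
                frontier.push(n);
            }
        }
    }
    return dist;
}
These distances can then serve as the per-cell costs described in the first step, for example by mapping larger distances to smaller path costs before feeding them into pfWayCost.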
I know this question has been discussed a lot. I searched the web and came up with an algorithm myself. I'm wondering whether it can serve as a default implementation that works fine for general unit tests (not serious/professional numeric tests).
bool equal_to(double x, double y) {
    using limits = std::numeric_limits<double>;
    auto mag_x = std::abs(x);
    auto mag_y = std::abs(y);
    if (mag_x < mag_y) {
        std::swap(x, y);
        std::swap(mag_x, mag_y);
    }
    auto eps = limits::epsilon() * mag_x;
    auto lb = x - eps;
    auto ub = x + eps;
    return lb < y && y < ub;
}
Just found a flaw. The last statement should be
return (x == y) || (lb < y && y < ub);
to handle the case equal_to(0, 0).
No, this isn't sufficient. Your eps is too low and should probably be multiplied by the number of steps used to produce x and y (although these errors often cancel, that isn't guaranteed).
Furthermore, your rounding errors may be amplified by catastrophic cancellation. If x is about 0.1 because it was calculated as 10.0 - 9.9, then you should have used limits::epsilon() * (10 + 9.9). You're too optimistic by about a factor of 100.
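For unit-test style comparisons, a common compromise (not the answer's exact suggestion) is to combine a caller-supplied relative tolerance with an absolute floor; a minimal sketch, with illustrative default tolerances:
#include <algorithm>
#include <cmath>

// True if x and y differ by at most abs_tol, or by at most rel_tol
// relative to the larger magnitude. The defaults are illustrative only
// and should be tuned to how the compared values were computed.
bool almost_equal(double x, double y,
                  double rel_tol = 1e-9, double abs_tol = 1e-12)
{
    double diff = std::abs(x - y);
    double scale = std::max(std::abs(x), std::abs(y));
    return diff <= std::max(abs_tol, rel_tol * scale);
}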
Recursion is not my strong point, and I have been challenged to write a recursive floodFill function that fills a vector of vectors of ints with 1's wherever the value is zero. My code keeps segfaulting for reasons beyond me. Perhaps my code will make that clearer.
This is the grid to be flood filled:
vector<vector<int> > grid_;
It belongs to an object I created called "Grid" that is basically a set of functions to help manipulate the vectors. The grid's values are initialized to all zeros.
This is my flood fill function:
void floodFill(int x, int y, Grid & G)
{
    if (G.getValue(x,y))
    {
        G.setValue(x,y,1);
        if (x < G.getColumns()-1 && x >= 0 && y < G.getRows()-1 && y >= 0)
            floodFill(x+1,y,G);
        if (x < G.getColumns()-1 && x >= 0 && y < G.getRows()-1 && y >= 0)
            floodFill(x,y+1,G);
        if (x < G.getColumns()-1 && x >= 0 && y < G.getRows()-1 && y >= 0)
            floodFill(x-1,y,G);
        if (x < G.getColumns()-1 && x >= 0 && y < G.getRows()-1 && y >= 0)
            floodFill(x,y-1,G);
    }
}
The intention here is for the function to check whether a point's value is zero, and if it is, change it to one. Then it should check the one above it for the same. It does this until it either finds a 1 or hits the end of the vector. Then it tries another direction and keeps going under the same conditions, and so on until everything is flood filled.
Can anyone help me fix this? Maybe tell me what's wrong?
Thanks!
if(x < G.getColumns()-1 && x >= 0 && y < G.getRows()-1 && y >= 0)
floodFill(x-1,y,G);
won't work, since you can access index -1 of the underlying vector if x == 0.
The same goes for floodFill(x, y-1, G).
This code has a lot of problems. First of all, you check with if (G.getValue(x,y)) whether the value at a position is non-zero, and if so, you set it to 1 with G.setValue(x,y,1). Think about this for a second: that can't be right – with that condition you only ever set values that are already non-zero to 1, and never fill in the zeros.
Then, another more subtle point is that you shouldn't recurse into neighbours that are already set to 1.
As it stands, the code you have will likely run until you overflow the stack, because it will just recurse forever on the 1's that are connected to wherever you start from.
How about this?
void floodFill(int x, int y, Grid &g) {
    if (x >= g.getColumns() || y >= g.getRows()) {
        return;
    }
    floodFill(x+1, y, g);
    if (x == 0) {
        floodFill(x, y+1, g);
    }
    g.setValue(x, y, 1);
}
I think that will fill the grid without ever hitting the same coordinate multiple times, and whenever either index is out of bounds it just returns, so there's no chance of a segfault.
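For comparison, here is a minimal sketch of a more conventional seed-based 4-way flood fill along the lines the answers describe: only fill cells that are still zero, and bounds-check every coordinate before touching it. It assumes the Grid interface from the question (getValue, setValue, getColumns, getRows):
// Recursive 4-way flood fill starting from (x, y).
// Stops at grid edges and at cells that are already non-zero.
void floodFillFrom(int x, int y, Grid &g)
{
    if (x < 0 || x >= g.getColumns() || y < 0 || y >= g.getRows())
        return;              // out of bounds
    if (g.getValue(x, y) != 0)
        return;              // already filled, stop recursing
    g.setValue(x, y, 1);     // fill this cell, which also prevents revisiting it
    floodFillFrom(x + 1, y, g);
    floodFillFrom(x - 1, y, g);
    floodFillFrom(x, y + 1, g);
    floodFillFrom(x, y - 1, g);
}
Note that on very large grids even this version can exhaust the stack; an explicit stack or queue of coordinates avoids that.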
I have an algorithm which can find if a point is in a given polygon:
int CGlEngineFunctions::PointInPoly(int npts, float *xp, float *yp, float x, float y)
{
    int i, j, c = 0;
    for (i = 0, j = npts-1; i < npts; j = i++) {
        if ((((yp[i] <= y) && (y < yp[j])) ||
             ((yp[j] <= y) && (y < yp[i]))) &&
            (x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i]))
            c = !c;
    }
    return c;
}
Given this, how could I make it check whether it's within a rectangle defined by Ptopleft and Pbottomright instead of a single point?
Thanks
Basically, you know how in Adobe Illustrator you can drag to select all objects that fall within the selection rectangle? Well, I mean that.
Can't you just find the minimum and maximum x and y values among the points of the polygon and check whether any of them fall outside the rectangle's dimensions?
EDIT: duh, I misinterpreted the question. If you want to ensure that the polygon is enclosed by the rectangle, do a check for each polygon point. You can do that more cheaply by taking the minimum/maximum x and y coordinates and checking whether that bounding rectangle is within the query rectangle.
EDIT2: Oops, meant horizontal, not vertical edges.
EDIT3: Oops #2, it does handle horizontal edges by avoiding checking edges that are horizontal. If you cross-multiply, however, you can avoid the special casing as well.
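A minimal sketch of that bounding-box idea (the Point type and the function name are illustrative; it assumes screen-style coordinates where ptopleft.y <= pbottomright.y, matching the Ptopleft/Pbottomright naming in the question):
#include <algorithm>

struct Point { float x, y; }; // assumed layout, matching the question's usage

// True if every vertex of the polygon lies inside the axis-aligned
// selection rectangle, i.e. the polygon's bounding box is contained in it.
bool selectsWholePolygon(int npts, const float *xp, const float *yp,
                         Point ptopleft, Point pbottomright)
{
    float minX = xp[0], maxX = xp[0], minY = yp[0], maxY = yp[0];
    for (int i = 1; i < npts; ++i) {
        minX = std::min(minX, xp[i]);
        maxX = std::max(maxX, xp[i]);
        minY = std::min(minY, yp[i]);
        maxY = std::max(maxY, yp[i]);
    }
    return minX >= ptopleft.x && maxX <= pbottomright.x &&
           minY >= ptopleft.y && maxY <= pbottomright.y;
}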
int isPointInRect(Point point, Point ptopleft, Point pbottomright) {
    // Build the rectangle as a proper 4-vertex polygon so that consecutive
    // entries are adjacent corners, as PointInPoly expects.
    float xp[4];
    xp[0] = ptopleft.x;
    xp[1] = pbottomright.x;
    xp[2] = pbottomright.x;
    xp[3] = ptopleft.x;
    float yp[4];
    yp[0] = ptopleft.y;
    yp[1] = ptopleft.y;
    yp[2] = pbottomright.y;
    yp[3] = pbottomright.y;
    return CGlEngineFunctions::PointInPoly(4, xp, yp, point.x, point.y);
}
As mentioned before, this function is overkill for that specific problem. However, if you are required to use it, note that:
1. It works only for convex polygons,
2. The arrays holding the polygon's vertices must be ordered such that consecutive points in the arrays relate to adjacent vertices of your polygon,
3. To work properly, the vertices must be ordered following the "right-hand rule". That means that when you start "walking" along the edges, you only make left turns.
That said, I think there is an error in the implementation. Instead of:
// c initialized to 0 (false), then...
c = !c;
you should have something like:
// c initialized to 1 (true), then...
// negate your condition:
if ( ! (....))
c = 0;
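For completeness, the direct check that makes PointInPoly unnecessary for an axis-aligned rectangle is just four comparisons; a sketch using the same Point type and the same screen-style coordinate assumption as above:
// Point-in-axis-aligned-rectangle without any polygon machinery.
bool isPointInRectDirect(Point p, Point ptopleft, Point pbottomright)
{
    return p.x >= ptopleft.x && p.x <= pbottomright.x &&
           p.y >= ptopleft.y && p.y <= pbottomright.y;
}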