Sorting objects to the front or back depending on their position - C++

I am trying to sort my renderables/actors correctly and noticed that I have some trouble with walls, since they get sorted by their center point. I sort all my actors by their distance to the camera with an insertion sort before drawing them. After that, I try to determine whether a wall should be drawn behind or in front of the game field. To explain: the game takes place inside a cube made up of 6 planes. Since I can rotate the camera around that cube, I need a sort that puts the planes in front/back depending on the camera. Here is a picture so you know what we are talking about:
You can clearly see the rendering mistake happening at the front of that snake-like shape.
Okay, here is my current sorting:
// actors holds Actor pointers; Actor is the abstract class which Wall, Cube and so on extend
void Group::insertionSort(vector<Actor *> &actors)
{
    for (size_t i = 1; i < actors.size(); i++)
    {
        Actor *val = actors[i];
        int j = i - 1;
        while (j >= 0 && distanceToCamera(*actors[j]) < distanceToCamera(*val))
        {
            actors[j + 1] = actors[j];
            j = j - 1;
        }
        actors[j + 1] = val;
    }
}
float Group::distanceToCamera(Actor &a)
{
    float result = 0;
    XMVECTOR posActor = XMLoadFloat3(&a.getPosition()); // here I get the center point of the object
    XMVECTOR posCamera = XMLoadFloat3(&m_camera->getPosition());
    XMVECTOR length = XMVector3Length(posCamera - posActor);
    XMStoreFloat(&result, length);
    return result;
}
To determine whether an actor is a Wall I used something like dynamic_cast<Wall*>(val), but I can't get the walls to the front/back of the vector based on that. Remember that the objects return their center point. Can anyone lead me to the right way?

It's difficult to answer your question because it is a complex system which you haven't fully explained here, and which you should also reduce to something simpler before posting; chances are you would find the fix yourself along the way. Anyway, I'll do some guessing...
Now, the first thing I'd fix is the sorting algorithm. Without analysing in depth whether it works correctly in all cases, I'd throw it out and use std::sort(), which is both efficient and very unlikely to contain errors.
While replacing it, you need to think about the ordering between two rendered objects carefully: The question is when exactly does one object need to be drawn before the other? You are using the distance of the center point to the camera. I'm not sure if you are sorting 2D objects or 3D objects, but in both cases it's easy to come up with examples where this doesn't work! For example, a large square that doesn't directly face the camera could cover up a smaller one, even if the smaller square's center is closer. Another problem is when two objects intersect. Similarly for 3D objects, if they have different sizes or intersect then your algorithm doesn't work. If your objects all have the same size and they can't intersect, you should be fine though.
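A back-to-front ordering with std::sort() could look like the sketch below. The Vec3 and Actor types here are minimal stand-ins for the question's XMFLOAT3/Actor types, and the camera position is passed in explicitly; squared distance is used because it orders points the same way as true distance while avoiding a square root per comparison.

```cpp
#include <algorithm>
#include <vector>

// Minimal stand-ins for the types used in the question.
struct Vec3 { float x, y, z; };
struct Actor { Vec3 pos; };

// Squared distance: same ordering as the real distance, no sqrt needed.
float distanceSq(const Vec3 &a, const Vec3 &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort back-to-front: the actor farthest from the camera is drawn first.
void sortBackToFront(std::vector<Actor *> &actors, const Vec3 &camera) {
    std::sort(actors.begin(), actors.end(),
              [&camera](const Actor *a, const Actor *b) {
                  return distanceSq(a->pos, camera) > distanceSq(b->pos, camera);
              });
}
```

Note that the comparator must be a strict weak ordering: use `>`, never `>=`, or std::sort() has undefined behaviour.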
Still, and here I suspect one problem, it could be that a surface of an object and a surface of the cube grid have exactly the same position. One approach is to shrink the objects slightly, or enlarge the outside grid, so that the order is always unambiguous. This would also work around floating-point rounding errors: due to these, two objects that have no mathematical ordering can end up in different relative positions depending on the circumstances, which can manifest as flickering between visible and covered depending on the camera angle.
One last thing: I'm assuming you want to solve this yourself for educational reasons, right? Otherwise, it would be a plain waste of time with existing rendering toolkits in place that would even offload all the computations to the graphics hardware.

Related

Find coordinates in a vector c++

I'm creating a game in Qt in C++, and I store every coordinate of a specific size into a vector like:
std::vector<std::unique_ptr<Tile>> all_tiles = createWorld(bgTile);
for (auto &tile : all_tiles) {
    tiles.push_back(std::move(tile));
}
Each level also has some healthpacks, which are stored in a vector as well.
std::vector<std::unique_ptr<Enemy>> all_enemies = getEnemies(nrOfEnemies);
for (auto &healthPackUniquePtr : all_healthpacks) {
    std::shared_ptr<Tile> healthPackPtr{std::move(healthPackUniquePtr)};
    int x = healthPackPtr->getXPos();
    int y = healthPackPtr->getYPos();
    int newYpos = checkOverlapPos(healthPackPtr->getXPos(), healthPackPtr->getYPos());
    newYpos = checkOverlapEnemy(healthPackPtr->getXPos(), newYpos);
    auto healthPack = std::make_shared<HealthPack>(healthPackPtr->getXPos(), newYpos, healthPackPtr->getValue());
    healthPacks.push_back(healthPack);
}
But now I'm searching for the fastest way to check if my player position is at a healthpack position. So I have to search on two values in a vector: the x and y position. Does anyone have a suggestion for how to do this?
Your 'real' question:
"I have to search on 2 values in a vector: x and y position. Anyone a suggestion how to do this?"
is a classic XY question, so I'm ignoring it!
"I'm searching for the fastest way to check if my player position is at a healthpack position."
Now we're talking. The approach you are using now won't scale well as the number of items increases, and you'd need to do something similar for every pair of object types you are interested in. Not good.
Thankfully this problem has been solved (and improved upon) for decades: you need a spatial partitioning scheme such as a BSP, BVH, quadtree/octree, etc. The beauty of these schemes is that a single data structure can hold the entire world, making arbitrary item intersection queries trivial (and fast).
You can implement a callback system: when the player moves onto a tile, fire a callback on that tile. Each tile knows its own state and can add health to the player, or do nothing if there is nothing on it. Using this technique, you don't need any searching at all.
If all_healthpacks has fewer than ~50 elements I wouldn't bother to improve anything; a simple loop is going to be sufficiently fast.
Otherwise you can split the vector into sectors and check only the elements in the same sector as your player (and maybe a few around it if the player is close to an edge).
If you need something more memory-friendly, you can use a k-d tree to index the healthpacks and search them quickly (O(log N) time).
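Since the check here is "is the player exactly at a healthpack position", the simplest structure that scales is a hash map keyed on the packed (x, y) position, giving O(1) average lookups. The HealthPack struct below is a hypothetical stand-in for the class in the question:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical minimal healthpack; the real class comes from the game code.
struct HealthPack { int x, y, value; };

// Pack (x, y) into a single 64-bit key so positions can be hashed directly.
std::uint64_t posKey(int x, int y) {
    return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(x)) << 32) |
           static_cast<std::uint32_t>(y);
}

class HealthPackIndex {
public:
    void add(const HealthPack &hp) { packs_[posKey(hp.x, hp.y)] = hp; }

    // O(1) average lookup instead of scanning the whole vector.
    const HealthPack *at(int x, int y) const {
        auto it = packs_.find(posKey(x, y));
        return it == packs_.end() ? nullptr : &it->second;
    }

private:
    std::unordered_map<std::uint64_t, HealthPack> packs_;
};
```

If you later need proximity queries rather than exact matches, the same key function works for a coarse grid: divide coordinates by a cell size before packing and check the neighbouring cells too.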

'Ray' creation for raypicking not fully working

I'm trying to implement a 'raypicker' for selecting objects within my project. I do not fully understand how to implement this, but I understand conceptually how it should work. I've been trying to learn how to do this, but most tutorials I find go way over my head. My current code is based on one of the recent tutorials I found, here.
After several hours of revisions, I believe the problem I'm having with my raypicker is actually the creation of the ray in the first place. If I substitute/hardcode my near/far planes with a coordinate that would indisputably be located within the region of a triangle, the picker identifies it correctly.
My problem is this: my ray creation doesn't seem to fully take my current "camera" or perspective into account, so camera rotation won't affect where my mouse is.
I believe to remedy this I need something like gluUnProject(), but whenever I used it, the x, y, z coordinates returned were incredibly small.
My current ray creation is a mess. I tried to use methods that others proposed initially, but it seemed like whatever method I tried, it never worked with my picker/intersection function.
Here's the code for my ray creation:
void oglWidget::mousePressEvent(QMouseEvent *event)
{
    QVector3D nearP = QVector3D(event->x() + camX, -event->y() - camY, -1.0);
    QVector3D farP  = QVector3D(event->x() + camX, -event->y() - camY,  1.0);
    int i = -1;
    for (int x = 0; x < tileCount; x++)
    {
        bool rayInter = intersect(nearP, farP, tiles[x]->vertices);
        if (rayInter == true)
            i = x;
    }
    if (i != -1)
    {
        tiles[i]->showSelection();
    }
    else
    {
        for (int x = 0; x < tileCount; x++)
            tiles[x]->hideSelection();
    }
    //tiles[0]->showSelection();
}
To repeat, I used to load up the viewport, model & projection matrices, and unproject the mouse coordinates, but within a 1920x1080 window, all I get is values in the range of -2 to 2 for x y & z for each mouse event, which is why I'm trying this method, but this method doesn't work with camera rotation and zoom.
I don't want to do pixel color picking because, who knows, I may need this technique later on, and I'd rather not give up after the amount of effort I've put in so far.
As you seem to have problems constructing your rays, here's how I would do it (this has not been tested directly). Make sure that all vectors are in the same space: if you use multiple model matrices (or stacks thereof), the calculation needs to be repeated separately for each of them.
Use pos = gluUnProject(winx, winy, near, ...) to get the position of the mouse coordinate on the near plane in model space, near being the value given to glFrustum() or gluPerspective().
The origin of the ray is the camera position in model space: rayorig = inv(modelmat) * camera_in_worldspace.
The direction of the ray is the normalized vector from the ray origin to the position from step 1: raydir = normalize(pos - rayorig).
On the website linked they use two points for the ray, and they don't seem to normalize the ray direction vector, so normalizing is optional.
Ok, so this is the beginning of my trail of breadcrumbs. I was somehow having issues with the Qt datatypes for the matrices, and with the logic pertaining to matrix transformations. The particular problem in this question resulted from not actually performing any transformations whatsoever.
Steps to solving this problem were:
Converting mouse coordinates into NDC space (within the range of -1 to 1: x / width * 2 - 1, (height - y) / height * 2 - 1)
Grabbing the 4x4 view matrix (can be the one used when rendering, or recalculated)
Computing a new matrix equal to the inverse view matrix multiplied by the inverse projection matrix.
In order to build the ray, I had to do the following:
Take the previously calculated matrix product and multiply it by a 4-component vector holding the previously calculated x and y coordinates, -1 for z, and 1 for w.
Divide this vector by its w component.
Create another 4-component vector just like the first, but with +1 for z instead of -1.
Once again, divide it by its w component.
Now the endpoints of the ray have been created at the near and far planes, so it can intersect with anything along it in the scene.
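The steps above can be sketched as follows. Mat4/Vec4 are minimal row-major stand-ins for the Qt matrix types, and the inverse of (projection * view) is assumed to be computed elsewhere (e.g. with QMatrix4x4::inverted()):

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>; // row-major 4x4 matrix

// Multiply a row-major 4x4 matrix by a column vector.
Vec4 mul(const Mat4 &m, const Vec4 &v) {
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[row * 4 + col] * v[col];
    return r;
}

// Build the ray's near/far endpoints from a mouse click, given the inverse
// of (projection * view) computed elsewhere.
void mouseRay(float mouseX, float mouseY, float width, float height,
              const Mat4 &invProjView, Vec4 &nearP, Vec4 &farP) {
    // Window -> NDC; y is flipped because window y grows downward.
    float ndcX = 2.0f * mouseX / width - 1.0f;
    float ndcY = 1.0f - 2.0f * mouseY / height;

    nearP = mul(invProjView, {ndcX, ndcY, -1.0f, 1.0f}); // z = -1: near plane
    farP  = mul(invProjView, {ndcX, ndcY,  1.0f, 1.0f}); // z = +1: far plane

    // Perspective divide by w puts both points back into world space.
    for (int i = 0; i < 3; ++i) {
        nearP[i] /= nearP[3];
        farP[i]  /= farP[3];
    }
    nearP[3] = farP[3] = 1.0f;
}
```

The two resulting points play the same role as the nearP/farP pair fed to the intersect() function in the question.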
I opened a series of questions (because of great uncertainty with my series of problems), so parts of my problem overlap in them too.
In here, I learned that I needed to take the screen height into account to flip the origin of the y axis for a Cartesian system, since Windows has the y axis start at the top left. Additionally, my retrieval of matrices was redundant, but also wrong, since they were never declared properly.
In here, I learned that unProject wasn't working because I was trying to pull the model and view matrices using OpenGL functions, but I never actually set them in the first place, because I built the matrices by hand. I solved that problem in two ways: I did the math manually, and I made all the matrices the same data type (they were mixed data types earlier, leading to issues as well).
And lastly, in here, I learned that my order of operations was slightly off (need to multiply matrices by a vector, not the reverse), that my near plane needs to be -1, not 0, and that the last value of the vector which would be multiplied with the matrices (value "w") needed to be 1.
Credits goes to those individuals who helped me solve these problems:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion
Take a look at
http://www.realtimerendering.com/intersections.html
It's a lot of help in determining intersections between various kinds of geometry.
http://geomalgorithms.com/code.html also has some C++ functions, one of which serves your purpose.

How to fill space inside lines in OpenGL?

I want to fill space inside lines in this code.
Main parts of code:
struct point { float x; float y; };

point a = { 100, 100 };
point b = { 0, 200 };
point c = { 0, 0 };
point d = { 100, 0 };

void displayCB(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    DeCasteljau();
    b.x = 200;
    c.x = 200;
    DeCasteljau();
    glFlush();
}
How to fill this heart with red color (for example) ?
There's no flood fill if that's what you're looking for, and no way to write one since the frame buffer isn't generally read/write and you don't get to pass state between fragments.
What you'd normally do is tessellate your shape — convert from its outline to geometry that covers the internal area, then draw the geometry.
Tessellation is described by Wikipedia here. Ear clipping isn't that hard to implement; monotone polygons are quite a bit more efficient, but the code is trickier. Luckily the OpenGL Utility Library (GLU) implements it for you, and a free version can be found via MESA — they've even unbundled it from the rest of their OpenGL reimplementation. A description of how to use it is here.
EDIT: see also the comments. If you're willing to do this per-pixel you can use a triangle fan with reverse face removal disabled that increments the stencil. Assuming the shape is closed that'll cause every point inside the shape to be painted an odd number of times and every point outside the shape to be painted an even number of times (indeed, possibly zero). You can then paint any shape you know to at least cover every required pixel with a stencil test on the least significant bit to test for odd/even.
(note my original suggestion of increment/decrement and test for zero makes an assumption that your shape is simple, i.e. no holes, the edge doesn't intersect itself, essentially it assumes that a straight line segment from anywhere inside to infinity will cross the boundary exactly once; Reto's improvement applies the rule that any odd number of crossings will do)
Setup for the stencil approach will be a lot cheaper, and simpler, but the actual cost of drawing will be a lot more expensive. So it depends how often you expect your shape to change and/or whether a pixel cache is appropriate.

Algorithm for sorting a two-dimensional array based on similarities of adjecent objects

I'm writing a program that is supposed to sort a number of square tiles (of which each side is colored in one of five colors — red, orange, blue, green and yellow) that are lying next to each other (e.g. 8 rows and 12 columns), in a way that as many sides with the same color connect as possible. So, for instance, a tile whose right side is colored red should have a tile on its right that has a red left side. The result is evaluated by counting how many non-matching pairs of sides exist on the board.
I'm pretty much done with the actual program; I just have some trouble with my sorting algorithm. Right now I'm using a bubble-sort-based algorithm that compares every piece on the board with every other piece, and if switching those two reduces the number of non-matching pairs of sides on the board, it switches them. Here is an abstracted version of the sorting function, as it is now:
for (int i = 0; i < DimensionOfBoard.cx * DimensionOfBoard.cy; i++)
    for (int j = 0; j < DimensionOfBoard.cx * DimensionOfBoard.cy; j++)
    {
        // Comparing a piece with itself is useless
        if (i == j)
            continue;
        // v1 is the number of nonmatching sides of both pieces
        // (max is 8, since we have 2 pieces with 4 sides each)
        int v1 = Board[i].GetNonmatchingSides() + Board[j].GetNonmatchingSides();
        // Switch the pieces; if this worsens the board (increases the
        // number of nonmatching sides), we'll switch back
        SwitchPieces(Board[i], Board[j]);
        // If switching worsened the situation ...
        if (v1 < Board[i].GetNonmatchingSides() + Board[j].GetNonmatchingSides())
            // ... we switch back to the initial state
            SwitchPieces(Board[i], Board[j]);
    }
As an explanation: Board is a pointer to an array of Piece objects. Each Piece has four Piece pointers that point to the four adjacent pieces (or NULL, if the Piece is a side/corner piece). And switching actually doesn't switch the pieces themselves, but rather switches the colors (instead of exchanging the pieces, it scrapes off the color of both and switches that).
This algorithm doesn't work too badly; it significantly improves the value of the board, but it doesn't optimize it as it should. I assume that's because side and corner pieces can't have more than three/two wrong adjacent pieces, since one/two of their sides are empty. I tried to compensate for that (by multiplying Board[i].GetMatchingPieces() by Board[i].GetHowManyNonemptySides() before comparing), but that didn't help a bit.
And that's where I need help. I don't know very many sorting algorithms, let alone ones that work with two-dimensional arrays. Does anyone know of an algorithmic concept that might help me improve my work? Or can anyone see a problem that I haven't found yet? Any help is appreciated. Thank you.
If there was a switch, you have to re-evaluate the board, because there might be earlier positions where you could now find an improvement.
Note that you are only going to find a local minimum with those swaps. You may reach a state where no swap improves the board, but that doesn't mean it's the best board configuration.
One way to find a better configuration is to shuffle the board and search for a new local minimum, or to use an algorithmic skeleton that allows you to make bigger jumps in the state space, e.g. simulated annealing.
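As an illustration of the simulated-annealing idea, here is a small self-contained sketch. It swaps elements of a plain vector to reduce a toy cost function (the number of out-of-order neighbours, standing in for the board's non-matching side count); the cooling schedule and temperature constants are arbitrary choices, not part of your board code:

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Toy cost: number of adjacent pairs that are out of order (a stand-in
// for the board's count of non-matching sides).
int cost(const std::vector<int> &v) {
    int c = 0;
    for (size_t i = 1; i < v.size(); ++i)
        if (v[i - 1] > v[i]) ++c;
    return c;
}

// Simulated annealing: always accept an improving swap; accept a worsening
// swap with probability exp(-delta / T), where T cools over time. The
// occasional uphill move is what lets the search escape local minima.
std::vector<int> anneal(std::vector<int> state, int iterations, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<size_t> pick(0, state.size() - 1);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    std::vector<int> best = state;
    double T = 10.0; // arbitrary starting temperature
    for (int it = 0; it < iterations; ++it, T *= 0.999) {
        size_t i = pick(rng), j = pick(rng);
        int before = cost(state);
        std::swap(state[i], state[j]);
        int delta = cost(state) - before;
        if (delta > 0 && unif(rng) >= std::exp(-delta / T))
            std::swap(state[i], state[j]); // rejected: undo the swap
        if (cost(state) < cost(best))
            best = state; // remember the best state seen so far
    }
    return best;
}
```

For the tile board, cost() would count non-matching side pairs and the swap would be SwitchPieces(); the acceptance rule stays the same.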

How do I fix eroded rectangles?

Basically, I have an image like this
or one with multiple rectangles within the same image. The rectangles are completely black and white, have "dirty" edges and gouges, but it's pretty easy to tell they're rectangles. To be more precise, they are image masks: the white regions are parts of the image which are to be left alone, but the black parts are to be made bitonal.
My question is, how do I make a nice and crisp rectangle out of this degraded one? I am a Python person, but I have to use Qt and C++ for this task. It would be preferable if no other libraries are used.
Thanks!
If the bounding box that contains all non-black pixels can do what you want, this should do the trick:
int boundLeft = INT_MAX;
int boundRight = -1;
int boundTop = INT_MAX;
int boundBottom = -1;
for (int y = 0; y < imageHeight; ++y) {
    bool hasNonMask = false;
    for (int x = 0; x < imageWidth; ++x) {
        if (isNotMask(x, y)) {
            hasNonMask = true;
            if (x < boundLeft) boundLeft = x;
            if (x > boundRight) boundRight = x;
        }
    }
    if (hasNonMask) {
        if (y < boundTop) boundTop = y;
        if (y > boundBottom) boundBottom = y;
    }
}
If the result has negative size, then there's no non-mask pixel in the image. The code can be more optimized but I haven't had enough coffee yet. :)
Usually you'd do that by repeatedly dilating and eroding the mask. I don't think Qt has premade functions for that, so you'll probably have to implement them yourself if you don't want to use libraries — http://ostermiller.org/dilate_and_erode.html has information on how to implement the functions.
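A minimal version of binary dilation and erosion on a 4-connected grid might look like the sketch below; closing (dilate, then erode) fills small gouges in the mask. This is an illustrative sketch, not the algorithm from the linked page:

```cpp
#include <vector>

using Grid = std::vector<std::vector<int>>; // 1 = white pixel, 0 = black

// Dilation: a pixel becomes 1 if it or any 4-neighbour is 1.
Grid dilate(const Grid &g) {
    int h = g.size(), w = g[0].size();
    Grid out(h, std::vector<int>(w, 0));
    const int dx[] = {0, 1, -1, 0, 0}, dy[] = {0, 0, 0, 1, -1};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int k = 0; k < 5; ++k) {
                int ny = y + dy[k], nx = x + dx[k];
                if (ny >= 0 && ny < h && nx >= 0 && nx < w && g[ny][nx])
                    out[y][x] = 1;
            }
    return out;
}

// Erosion (the dual): a pixel stays 1 only if it and all 4-neighbours
// are 1; pixels outside the image count as 0.
Grid erode(const Grid &g) {
    int h = g.size(), w = g[0].size();
    Grid out(h, std::vector<int>(w, 1));
    const int dx[] = {0, 1, -1, 0, 0}, dy[] = {0, 0, 0, 1, -1};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int k = 0; k < 5; ++k) {
                int ny = y + dy[k], nx = x + dx[k];
                if (ny < 0 || ny >= h || nx < 0 || nx >= w || !g[ny][nx])
                    out[y][x] = 0;
            }
    return out;
}
```

Running erode(dilate(mask)) closes pixel-sized holes and notches; the opposite order (opening) removes isolated specks instead.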
For the moment, we'll assume they're all supposed to come out as rectangles with no rotation. In this case, you should be able to use a pretty simple approach. Starting from each pixel at the edge of the bitmap, start sampling pixels working your way inward until you encounter a transition. Record the distance from the edge for each transition (if there is one). Once you've done that from each edge, you basically "take a vote" -- the distance that occurred most often from that edge is what you treat as that edge of the rectangle. If the rectangle really is aligned, that should constitute a large majority of the distances.
If, instead, you see a number of distances with nearly equal frequencies, chances are that the rectangle is rotated (or at least one edge is). In this case, you can divide the side in half (for example) and repeat. Once you've reached a large majority of points in each region agreeing on the distance, you can (attempt to) linearly interpolate between them to give a straight line (and limiting the minimum region size will limit the maximum rotation — if you get to some size without reaching agreement, you're looking at a gouge, not the rectangle edge). Likewise, if you have a region (or more than one) that doesn't fit cleanly with the rest and won't fit a line, you should probably ignore it as well — again, you're probably looking at a gouge, not what's intended as an edge.
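The voting idea can be sketched like this for one edge. The mask here is a hypothetical binary image where 1 marks a white (kept) pixel; for each row we record the distance from the left edge to the first white pixel, and the most frequent distance wins:

```cpp
#include <map>
#include <vector>

// For the left edge: record the distance to the first white pixel in each
// row, then take the most frequent distance as the rectangle's edge.
int voteLeftEdge(const std::vector<std::vector<int>> &mask) {
    std::map<int, int> votes; // distance -> number of rows voting for it
    for (const auto &row : mask) {
        for (int x = 0; x < (int)row.size(); ++x) {
            if (row[x]) { ++votes[x]; break; }
        }
    }
    int bestDist = -1, bestCount = 0;
    for (const auto &[dist, count] : votes)
        if (count > bestCount) { bestCount = count; bestDist = dist; }
    return bestDist;
}
```

The same scan, run from each of the four sides (and transposed for top/bottom), yields the four edges of the cleaned-up rectangle; rows whose distance disagrees with the winner are the gouges to ignore.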