C++: how do you update player height based on the height of the terrain?

I used to have a project that used a greyscale image to set the height of the vertices in a simple flat mesh, which resulted in nice-looking height-mapped terrain. However, I have since converted the project to C++ and can no longer rely on BufferedImage and similar classes, so the old greyscale approach for creating height-mapped terrain from a flat mesh is no longer available.
Because of this, my C++ project now uses a .obj file for the terrain, but I am finding it very difficult to update the player/camera height as the user walks around the terrain. Instead I just float through everything, as the player height either never changes or only changes sporadically (so I know the height is at least being updated, just not correctly).
Below is a small sample of code that does the actual updating of the player height based on the terrain .obj file. It stores all the vertices of the .obj file and compares each vertex's x and z components with the x and z components of the player; if there is a match, it sets the player's y value to that vertex's y position:
Vector3f *playerPos = freeMoveObjects[0]->GetParent()->GetTransform()->GetPos();
float playerXPos = playerPos->GetX();
float playerZPos = playerPos->GetZ();
int playerXPosInt = (int)playerXPos;
int playerZPosInt = (int)playerZPos;
for (Vector3f currentVector : meshObjects[0]->getMeshVertices()) {
    int meshHeightXInt = (int)currentVector.GetX();
    int meshHeightZInt = (int)currentVector.GetZ();
    if (meshHeightXInt == playerXPosInt && meshHeightZInt == playerZPosInt) { //currentVector.GetX() <= playerXPos && currentVector.GetZ() <= playerZPos
        freeMoveObjects[0]->GetParent()->GetTransform()->GetPos()->SetY(currentVector.GetY());
    }
}

What you're doing is very inefficient right now; you shouldn't have to loop through all vertices if you already have a heightmap which you can use for updating the player's position.
Why don't you store the heightmap in a pseudo-2d-array, like this:
class HeightMap {
private:
    int width;
    int height;
    std::unique_ptr<int[]> data = std::make_unique<int[]>(width * height);

public:
    HeightMap(int width, int height) : width{width}, height{height} {}

    // move construction/assignment can be defaulted; copying would need a
    // manual deep copy because of the unique_ptr member

    int heightAt(int x, int y) const {
        return data[x + y * width];
    }
};
Using heightAt you now have efficient random access to the data in the map. Thanks to unique_ptr you don't need to manage any memory manually, and the move constructor and move assignment can simply be defaulted (copying would require a deep copy of the buffer).
Note that you need to handle errors like x or y being out of range manually.
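To actually stop the camera snapping between integer positions, you can sample the map at the player's fractional X/Z position and blend between the four surrounding grid points. The sketch below is only an illustration, assuming one heightmap cell corresponds to one world unit; sampleHeight is a made-up helper name, not part of the code above:
#include <cmath> // std::floor

// Bilinearly interpolate the terrain height at a fractional world position.
// (Remember the range check mentioned above: x0 + 1 / z0 + 1 can step past the edge.)
float sampleHeight(const HeightMap &map, float worldX, float worldZ)
{
    int x0 = (int)std::floor(worldX);
    int z0 = (int)std::floor(worldZ);
    float fx = worldX - x0; // fractional offset inside the cell
    float fz = worldZ - z0;

    float h00 = map.heightAt(x0,     z0);
    float h10 = map.heightAt(x0 + 1, z0);
    float h01 = map.heightAt(x0,     z0 + 1);
    float h11 = map.heightAt(x0 + 1, z0 + 1);

    float hx0 = h00 + (h10 - h00) * fx; // blend along X on both rows
    float hx1 = h01 + (h11 - h01) * fx;
    return hx0 + (hx1 - hx0) * fz;      // then blend the two rows along Z
}

// e.g. playerPos->SetY(sampleHeight(map, playerXPos, playerZPos));
Here map would be a HeightMap instance built from the terrain data when the level is loaded.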

Related

Rendering an array of the same sprite at different locations

I am trying to recreate Space Invaders and I am having trouble getting my invader sprite to display in a row instead of on top of itself. I have a struct called Sprite that contains the x position, y position, width, height, and color, and I pass that information into a custom function that draws the sprite. Everything works if I am just creating one sprite, but I want to create multiple ones (say 5 for testing purposes). So I created an array of the Sprite structure, and in a for loop I tried to change the x position of each element within the array and then use the custom function to draw an element of the array at each iteration of the loop, but for some reason I am not getting the proper result. I can see in the debugger that the x position is in fact being changed, but everything still seems to get drawn on top of each other.
struct INVADER
{
    int xPos = 100;
    int yPos;
    int width;
    int height;
    D3DCOLOR color;

    INVADER()
    {
        xPos;
        yPos;
        width = 64;
        height = 64;
        color = D3DCOLOR_XRGB(255, 255, 255);
    }
};
INVADER invaderArmy[5];

for (int index = 0; index < 5; index++)
{
    //invaderArmy->xPos = 100;
    invaderArmy[index].xPos *= index;
    Draw_And_Rotate_Sprite(invaderImage, invaderArmy[index].xPos);
}
I'm only writing this as an answer because I don't have 50 rep to comment.
Shouldn't the argument passed to Draw_And_Rotate_Sprite() be invaderArmy[index] instead of invaderArmy[index].xPos?
Also, there isn't a problem I can see in the code presented. Please show us the Draw_And_Rotate_Sprite() function definition.
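For what it's worth, here is a rough sketch of how the row layout is usually done; the spacing constant and the idea of passing the whole struct are assumptions on my part, since the real Draw_And_Rotate_Sprite() signature isn't shown:
const int START_X = 100; // left edge of the row (made-up value)
const int SPACING = 80;  // horizontal gap between invaders, in pixels (made-up value)

INVADER invaderArmy[5];
for (int index = 0; index < 5; index++)
{
    // add an offset instead of multiplying, so invader 0 doesn't end up at x = 0
    invaderArmy[index].xPos = START_X + index * SPACING;
    invaderArmy[index].yPos = 50; // same row for every invader
    Draw_And_Rotate_Sprite(invaderImage, invaderArmy[index]); // pass the whole struct
}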

How to rotate an object so it faces another?

I am making a game in OpenGL and I can't figure out how to make my enemy characters turn to face my player. I only need the enemy to rotate on the y axis towards the player, and then I want them to move towards him. I have tried a bunch of different methods but haven't been able to get anything to work.
There are a few things you need to decide on at the beginning of a project and then use consistently throughout it, such as the representation of positions and orientations (as well as the setup of the screen/clip planes etc.). However, you haven't mentioned any of this, so you may have to adapt the code below to suit your game; it should be easy to adapt.
For the following example, I'll assume that the -y axis points towards the top of your screen.
#include <math.h> // atan2

#define MOVEMENT_SPEED 1.0f // tune to taste

// you need a way to represent positions and directions
struct vector2 {
    float x;
    float y;
} playerPosition, enemyPosition;

float playerRotation;

// set up the instances and values
void setup() {
    // set some default values for the positions
    playerPosition.x = 100;
    playerPosition.y = 100;
    enemyPosition.x = 200;
    enemyPosition.y = 300;
}

// called every frame
void update(float delta) {
    // get the direction vector from the player to the enemy. We can use this both to
    // calculate the rotation angle between the two and to move the player towards the
    // enemy (swap the roles if it is the enemy that should chase the player).
    vector2 dirToEnemy;
    dirToEnemy.x = enemyPosition.x - playerPosition.x;
    dirToEnemy.y = enemyPosition.y - playerPosition.y;

    // move the player towards the enemy (note: the vector is not normalized,
    // so the speed scales with the distance between the two)
    playerPosition.x += dirToEnemy.x * delta * MOVEMENT_SPEED;
    playerPosition.y += dirToEnemy.y * delta * MOVEMENT_SPEED;

    // get the player's angle around the y axis
    playerRotation = atan2(-dirToEnemy.y, dirToEnemy.x);
}

void draw() {
    // use playerPosition and playerRotation to render the player
}
Using the above code, you should be able to move your player object around and set its angle of rotation (watch out for radians versus degrees in the returned and expected angle values).
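If the renderer is the fixed-function OpenGL pipeline, one extra detail worth showing is the radians-to-degrees conversion, since atan2 returns radians while glRotatef expects degrees. This is just a sketch using playerRotation from update() above:
#include <cmath> // M_PI (you may need to define _USE_MATH_DEFINES on MSVC)

// convert the atan2 result to degrees and rotate around the Y axis
float degrees = playerRotation * 180.0f / (float)M_PI;
glRotatef(degrees, 0.0f, 1.0f, 0.0f);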

Isometric Collision - 'Diamond' shape detection

My project uses an isometric perspective; for the time being, I am showing the co-ordinates in grid format above the tiles for debugging. However, when it comes to collision/grid-locking of the player, I have an issue.
Due to the nature of sprite drawing, my maths is creating some issues with the 'triangular' empty corner areas of the textures. I think the issue is something like below (blue is how I think my tiles are being detected, whereas red is how they ideally should be detected for accurate roaming movement on the tiles):
As you can see, the boolean check for the tile I am standing on (which takes the pixel central to the player's feet; the player will later be a car and take a pixel based on the direction of movement) is returning false and denying movement in several scenarios, as well as letting the player move in some places that shouldn't be allowed.
I think that is because the cut-off areas of each texture are (I think) being considered part of the grid area, so when the player is in one of these corner areas the check is not truly looking at the correct tile, and so returns the wrong results.
The code I'm using for creating the grid is this:
int VisualComponent::TileConversion(Tile* tileToConvert, bool xOrY)
{
    int X = (tileToConvert->x - tileToConvert->y) * 64; // change 64 to TILE_WIDTH_HALF
    int Y = (tileToConvert->x + tileToConvert->y) * 25;
    /*int X = (tileToConvert->x * 128 / 2) + (tileToConvert->y * 128 / 2) + 100;
    int Y = (tileToConvert->y * 50 / 2) - (tileToConvert->x * 50 / 2) + 100;*/
    if (xOrY)
    {
        return X;
    }
    else
    {
        return Y;
    }
}
and the code for checking the player's movement is:
// check if the movement will end on a legitimate road tile
// UNOPTIMISED AS RUNS EVERY FRAME FOR EVERY TILE
bool Clsentity::CheckMovementTile(int xpos, int ypos, ClsMapData* mapData)
{
    int x = xpos + 7;  // use the centre-bottom pixel, as this is more suitable than the first on an iso grid (more realistic 'foot' placement)
    int y = ypos + 45;
    int mapX = (x / 64 + y / 25) / 2;  // 64 is the tile half-width and 25 is the (effective) tile half-height
    int mapY = (y / 25 - (x / 64)) / 2;
    for (int i = 0; i < mapData->tilesList.size(); i++) // for each tile of the map
    {
        if (mapData->tilesList[i]->x == mapX && mapData->tilesList[i]->y == mapY) // if there is an existing tile that will be entered
        {
            if (mapData->tilesList[i]->movementTile)
            {
                HAPI->DebugText(std::to_string(mapX) + " is the x and the y is " + std::to_string(mapY));
                return true;
            }
        }
    }
    return false;
}
I'm a little stuck on making progress until this is fixed, since it affects the game-loop side of things. If anyone recognises the issue from this or can help, that would be great and I would appreciate it. For reference, my tile textures are 128x64 pixels and the maths behind drawing them to screen treats them as 128x50 (so they link together cleanly).
Rather than writing specific routines for rendering and click mapping, seriously consider thinking of these as two views on the data, which can be transformed in terms of matrix transformations of a coordinate space. You can have two coordinate spaces - one is a nice rectangular grid that you use for positioning and logic. The other is the isometric view that you use for display and input.
If you're not familiar with linear algebra, it'll take a little bit to wrap your head around it, but once you do, it makes everything trivial.
So, how does that work? Your isometric view is merely a rotation of a bog standard grid view, right? Well, close. Isometric view also changes the dimensions if you're starting with a square grid. Anyhow: can we just do a simple coordinate transformation?
Logical coordinate system -> display system (e.g. for rendering)
Texture point => Rotate 45 degrees => Scale by sqrt(2) because a 45 degree rotation changes the dimension of the block by sqrt(1 * 1 + 1 * 1)
Display system -> logical coordinate system (e.g. for mapping clicks into logical space)
Click point => descale by sqrt(2) to unsquish => unrotate by 45 degrees
Why?
If you can do coordinate transformations, then you'd be dealing with a pretty bog-standard rectangular grid for everything else you write, which will make all your other logic MUCH simpler. Your calculations there won't involve computing angles or slopes. E.g. now your "can I move 'down'" logic is much simpler.
Let's say you have 64 x 64 tiles, for simplicity. Now transforming a screen space click to a logical tile is simply:
std::pair<int, int> whichTile(int clickX, int clickY) {
    // transform() is the display -> logical mapping described above,
    // returning the logical coordinates as a pair
    auto [logicalX, logicalY] = transform(clickX, clickY);
    return { logicalX / 64, logicalY / 64 };
}
You can do checks like seeing whether (x0, y0) and (x1, y1) are on the same tile, in the logical space, with something as simple as:
bool isSameTile(float x0, float y0, float x1, float y1) {
    return std::floor(x0 / 64) == std::floor(x1 / 64)
        && std::floor(y0 / 64) == std::floor(y1 / 64);
}
Everything gets much simpler once you define the transforms and work in the logical space.
http://en.wikipedia.org/wiki/Rotation_matrix
http://en.wikipedia.org/wiki/Scaling_%28geometry%29#Matrix_representation
http://www.alcove-games.com/advanced-tutorials/isometric-tile-picking/
If you don't want to deal with some matrix library, you can do the equivalent math pretty straightforwardly, but if you separate concerns of logic management from display / input through these transformations, I suspect you'll have a much easier time of it.
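As a concrete illustration of those two transforms with the numbers from the question (half-width 64, half-height 25), something along these lines should work. Doing the inverse in floating point and flooring at the end avoids the truncation that the integer divisions in CheckMovementTile introduce near tile edges; this is a sketch, not drop-in code for the classes above:
#include <cmath> // std::floor

const float TILE_WIDTH_HALF  = 64.0f;
const float TILE_HEIGHT_HALF = 25.0f;

// logical tile coordinates -> screen position (matches TileConversion above)
void tileToScreen(float tileX, float tileY, float &screenX, float &screenY)
{
    screenX = (tileX - tileY) * TILE_WIDTH_HALF;
    screenY = (tileX + tileY) * TILE_HEIGHT_HALF;
}

// screen position -> logical tile coordinates, e.g. for the foot-pixel check
void screenToTile(float screenX, float screenY, int &tileX, int &tileY)
{
    float tx = (screenX / TILE_WIDTH_HALF + screenY / TILE_HEIGHT_HALF) / 2.0f;
    float ty = (screenY / TILE_HEIGHT_HALF - screenX / TILE_WIDTH_HALF) / 2.0f;
    tileX = (int)std::floor(tx);
    tileY = (int)std::floor(ty);
}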

How to solve performance issues with QPixmap (large drawingjobs)?

I am coding a small map editor (with rectangular tiles) and I need a way to draw a large number of images OR one big image. The application is simple: you draw images onto an empty screen with your mouse, and when you are finished you can save the result. A tile consists of a small image.
I tried out several solutions to display the tiles:
Each tile has its own QGraphicsItem (this works until you have a 1000x1000 map).
Each tile gets drawn onto one big QPixmap (this means a very large image; for example, a map of 1000x1000 tiles where each tile has a size of 32x32 means the QPixmap has a size of 32000x32000, which is a problem for QPainter).
The current solution: iterate through the width and height of the TileLayer and draw each single tile with painter->drawPixmap(). The paint() method of my TileLayer looks like this:
void TileLayerGraphicsItem::paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QWidget* /*widget*/)
{
    painter->setClipRect(option->exposedRect);
    int m_width = m_layer->getSize().width();
    int m_height = m_layer->getSize().height();
    for (int i = 0; i < m_width; i++)
    {
        for (int j = 0; j < m_height; j++)
        {
            Tile* thetile = m_layer->getTile(i, j);
            if (thetile == NULL) continue;
            const QRectF target(thetile->getLayerPos().x() * thetile->getSize().width(),
                                thetile->getLayerPos().y() * thetile->getSize().height(),
                                thetile->getSize().width(),
                                thetile->getSize().height());
            const QRectF source(0, 0, thetile->getSize().width(), thetile->getSize().height());
            painter->drawImage(target, *thetile->getImage(), source);
        }
    }
}
This works for small maps with 100x100 or even 1000x100 tiles, but not for 1000x1000. The whole application begins to lag, which is of course because the nested loop is extremely expensive. To make my tool useful I need to be able to handle at least 1000x1000 tile maps without lag. Does anyone have an idea what I can do? How should I represent the tiles?
Update:
I changed the following: only layers that exceed the window size of the minimap are now drawn by plotting a single pixel per sampled tile. This is my render function now:
void RectangleRenderer::renderMinimapImage(QPainter* painter, TileMap* map, QSize windowSize)
{
    for (int i = 0; i < map->getLayers().size(); i++)
    {
        TileLayer* currLayer = map->getLayers().at(i);
        // if the layer is small, draw it completely
        if (windowSize.width() > currLayer->getSize().width() && windowSize.height() > currLayer->getSize().height())
        {
            ...
        }
        else // this is the part where the map is so big that only some pixels are drawn!
        {
            painter->fillRect(0, 0, windowSize.width(), windowSize.height(), QBrush(QColor(map->MapColor)));
            for (float i = 0; i < windowSize.width(); i++)
            {
                for (float j = 0; j < windowSize.height(); j++)
                {
                    float tX = i / windowSize.width();
                    float tY = j / windowSize.height();
                    float pX = lerp(i, currLayer->getSize().width(), tX);
                    float pY = lerp(j, currLayer->getSize().height(), tY);
                    Tile* thetile = currLayer->getTile((int)pX, (int)pY);
                    if (thetile == NULL) continue;
                    QRgb pixelcolor = thetile->getImage()->toImage().pixel(thetile->getSize().width() / 2, thetile->getSize().height() / 2);
                    QPen pen;
                    pen.setColor(QColor::fromRgb(pixelcolor));
                    painter->setPen(pen);
                    painter->drawPoint(i, j);
                }
            }
        }
    }
}
This does not work correctly, but it is pretty fast. The problem is my lerp (linear interpolation) step, i.e. finding the correct tile to sample a pixel from.
Does anyone have a better way to find the correct tile while I iterate through the minimap pixels? At the moment I interpolate linearly between 0 and the maximum size of the tile map, and it does not work correctly.
UPDATE 2
// currLayer->getSize() returns how many tiles are in the map
// currLayer->getTileSize() returns how big each tile is (32 pixels wide, for example)
int raw_width = currLayer->getSize().width() * currLayer->getTileSize().width();
int raw_height = currLayer->getSize().height() * currLayer->getTileSize().height();
int desired_width = windowSize.width();
int desired_height = windowSize.height();
int calculated_width = 0;
int calculated_height = 0;
// if dealing with a one-dimensional image buffer, this ensures
// the rows come out clean, and you don't lose a pixel occasionally
desired_width -= desired_width % 2;
// http://qt-project.org/doc/qt-5/qt.html#AspectRatioMode-enum
// Qt::KeepAspectRatio, and the offset can be used for centering
qreal ratio_x = (qreal)desired_width / raw_width;
qreal ratio_y = (qreal)desired_height / raw_height;
qreal floating_factor = 1;
QPointF offset;
if (ratio_x < ratio_y)
{
    floating_factor = ratio_x;
    calculated_height = raw_height * ratio_x;
    calculated_width = desired_width;
    offset = QPointF(0, (qreal)(desired_height - calculated_height) / 2);
}
else
{
    floating_factor = ratio_y;
    calculated_width = raw_width * ratio_y;
    calculated_height = desired_height;
    offset = QPointF((qreal)(desired_width - calculated_width) / 2, 0);
}
for (int r = 0; r < calculated_height; r++)
{
    for (int c = 0; c < calculated_width; c++)
    {
        // trying to do the following: use your code to get the desired pixel,
        // then divide that number by the size of the tile to get the correct tile
        Tile* thetile = currLayer->getTile((int)((r * floating_factor) * raw_width) / currLayer->getTileSize().width(),
                                           (int)(((c * floating_factor) * raw_height) / currLayer->getTileSize().height()));
        if (thetile == NULL) continue;
        QRgb pixelcolor = thetile->getImage()->toImage().pixel(thetile->getSize().width() / 2, thetile->getSize().height() / 2);
        QPen pen;
        pen.setColor(QColor::fromRgb(pixelcolor));
        painter->setPen(pen);
        painter->drawPoint(r, c);
    }
}
Trying to reverse engineer the example code, but it still does not work correctly.
Update 3
I tried the linear interpolation from update 1 again, and while looking at the code I saw the error:
float pX=lerp(i,currLayer->getSize().width(),tX);
float pY=lerp(j,currLayer->getSize().height(),tY);
should be:
float pX=lerp(0,currLayer->getSize().width(),tX);
float pY=lerp(0,currLayer->getSize().height(),tY);
That's it. Now it works.
This shows how to do it properly. You use a level of detail (lod) variable to determine how to draw the elements that are currently visible on the screen, based on their zoom.
http://qt-project.org/doc/qt-5/qtwidgets-graphicsview-chip-example.html
Also don't iterate through all the elements that could be visible, but only go through the ones that have changed, and of those, only the ones that are currently visible.
Your next option is some form of manual caching, so you don't have to repeatedly iterate over O(n²) tiles every frame.
If you can't optimize it for QGraphicsView/QGraphicsScene... then OpenGL is probably what you may want to look into. It can do a lot of the drawing and caching directly on the graphics card so you don't have to worry about it as much.
UPDATE:
Pushing changes to a QImage on a worker thread lets you build and update a cache while leaving the rest of your program responsive; you then use a queued connection to get back onto the GUI thread and draw the QImage as a QPixmap.
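A minimal sketch of that idea, assuming Qt 5 with QtConcurrent; m_cachedPixmap, minimapWidth and minimapHeight are placeholder names, and the actual tile sampling is left as a comment:
#include <QtConcurrent/QtConcurrent>
#include <QFutureWatcher>
#include <QImage>
#include <QPixmap>

// somewhere in the widget/item that owns the minimap:
auto *watcher = new QFutureWatcher<QImage>(this);
QObject::connect(watcher, &QFutureWatcher<QImage>::finished, this, [this, watcher]() {
    m_cachedPixmap = QPixmap::fromImage(watcher->result()); // back on the GUI thread
    update();                                               // schedule a repaint
    watcher->deleteLater();
});
watcher->setFuture(QtConcurrent::run([=]() {
    QImage img(minimapWidth, minimapHeight, QImage::Format_RGB32);
    img.fill(Qt::black);
    // ... fill img with one pixel per sampled tile, as in the decimation code below ...
    return img;                                             // rendered off the GUI thread
}));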
QGraphicsView will let you know which tiles are visible if you ask nicely:
http://qt-project.org/doc/qt-5/qgraphicsview.html#items-5
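For example (view here stands for whatever QGraphicsView is showing the scene; the call maps the visible viewport rectangle into the scene and returns the intersecting items):
// only the items intersecting the visible viewport need to be touched
QList<QGraphicsItem *> visibleTiles = view->items(view->viewport()->rect());
for (QGraphicsItem *item : visibleTiles) {
    // update / repaint only these tiles
}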
UPDATE 2:
http://qt-project.org/doc/qt-5/qtwidgets-graphicsview-chip-chip-cpp.html
You may need to adjust the range of zooming out that is allowed on the project to test this feature...
Under where it has
const qreal lod = option->levelOfDetailFromTransform(painter->worldTransform());
if (lod < 0.2) {
    if (lod < 0.125) {
        painter->fillRect(QRectF(0, 0, 110, 70), fillColor);
        return;
    }
    QBrush b = painter->brush();
    painter->setBrush(fillColor);
    painter->drawRect(13, 13, 97, 57);
    painter->setBrush(b);
    return;
}
Add in something like:
if (lod < 0.05)
{
    // using some sort of row/col value to know which ones to not draw...
    // This below would only draw 1/3 of the rows and 1/3 of the columns,
    // speeding up the redraw by about 9x.
    if (row % 3 != 0 || col % 3 != 0)
        return; // don't do any painting, return
}
UPDATE 3:
Decimation Example:
// How to decimate an image to any size, properly
// aka fast scaling
int raw_width = 1000;
int raw_height = 1000;
int desired_width = 300;
int desired_height = 200;
int calculated_width = 0;
int calculated_height = 0;
// if dealing with a one dimensional image buffer, this ensures
// the rows come out clean, and you don't lose a pixel occasionally
desired_width -= desired_width%2;
// http://qt-project.org/doc/qt-5/qt.html#AspectRatioMode-enum
// Qt::KeepAspectRatio, and the offset can be used for centering
qreal ratio_x = (qreal)desired_width / raw_width();
qreal ratio_y = (qreal)desired_height / raw_height();
qreal floating_factor = 1;
QPointF offset;
if(ratio_x < ratio_y)
{
floating_factor = ratio_x;
calculated_height = raw_height*ratio_x;
calculated_width = desired_width;
offset = QPointF(0, (qreal)(desired_height - calculated_height)/2);
}
else
{
floating_factor = ratio_y;
calculated_width = raw_width*ratio_y;
calculated_height = desired_height;
offset = QPointF((qreal)(desired_width - calculated_width)/2);
}
for (int r = 0; r < calculated_height; r++)
{
for (int c = 0; c < calculated_width; c++)
{
pixel[r][c] = raw_pixel[(int)(r * floating_factor)*raw_width][(int)(c * floating_factor)];
}
}
Hope that helps.

How to minimalize the time to load Vectors?

I'm currently trying to develop a game and I'm having some trouble with the map.
The map works the following way: there is a map class (GMap) which contains a vector of tiles.
class GMap
{
private:
    std::vector<BTiles> TileList;
    ...
So there will be a Load function in GMap which will load all the tiles from a txt file.
All the tiles have their own functions, like render for example, and their own variables, like ID and type of tile.
I can easily render the tiles, but my problem is that, since the maps are kind of big and each tile is only 16x16 pixels, it takes a lot of them to fill the whole surface. And since there are so many of them, it takes way too long to load: something like 30-40 seconds for only a small part of them.
I still haven't written the code that actually reads the txt file (which will contain the information about how many tiles to load, which types they are and their positions), so I have been using this code to test the tile rendering:
bool GMap::Load(char *File)
{
    int XRand;
    for (int i = 0; i < 1024; i++) // I need 1024 tiles to fill a screen of 512x512 pixels
    {
        BTiles NewTile;        // BTiles is the tile class
        XRand = rand() % 5;    // there are currently only 5 types of tile; I print them randomly, just for testing
        NewTile.OnLoad(XRand, i); // this sets Type = XRand and ID = i; the type defines which block to
                                  // print on the screen, and the ID defines where to print it
        TileList.push_back(NewTile);
    }
    return true;
}
This is the Tiles OnLoad function:
bool BTiles::OnLoad(int BType, int BID)
{
    if ((BSurface = Surface::OnLoad("BTexture.png")) == false)
        return false;

    Type = BType;
    ID = BID;
    return true;
}
I can then print all of the tiles the following way:
void GMap::Render(SDL_Surface *MainSurface)
{
    for (int i = 0; i < TileList.size(); i++)
    {
        // calling a Render function inside the tile class; MainSurface is the
        // primary surface I'm using to render images
        TileList[i].OnRender(MainSurface);
    }
}
But my problem is in the Load function. It takes way too much time to load those 1024 tiles, and 1024 tiles are only a fraction of the amount I will actually have to load for a serious map. Besides, it won't even load them all: after the huge amount of time it takes to "load" the 1024 tiles, it only prints about half of them. The screen isn't completely covered with tiles, even though I "loaded" the correct amount to fill the whole screen. I then increased the number from 1024 to 2048, hoping that would fill the screen, but it changed nothing. It's as if it loads a certain amount and then just stops, or at least stops rendering.
In case anyone wants to know how the rendering is done: I have a global function that does the work, and in the tile class I have this function:
void BTiles::OnRender(SDL_Surface *MSurface)
{
    // Since I am only using the ID to know where to put a tile, this works out the horizontal
    // position. M_WIDTH is a global that holds the screen width (currently 512).
    int X = (ID * 16) % M_WIDTH;
    // The same, but for the vertical position. M_HEIGHT is currently also 512.
    int Y = ((ID * 16) / M_HEIGHT) * 16;
    // Render(onto the primary surface, using the image in BSurface, at position X, at position Y,
    //        where the wanted tile starts on the X axis, where it starts on the Y axis,
    //        it is 16 pixels wide, it is 16 pixels high)
    Surface::OnDraw(MSurface, BSurface, X, Y, (Type * 16) % M_WIDTH, (Type * 16) / M_HEIGHT, 16, 16);
}
I apologize that I didn't explain that last function properly, but I don't think my problem is there.
Anyway, if anyone needs more info on any part of the code, just ask.
Thank you!
I discovered the problem. Each tile had its own surface, and each one loaded the same image. That means I was creating 1024 surfaces and loading the image 1024 times. What I did to solve the problem was to create a single surface in the map class, which is used by all tiles.
So
bool BTiles::OnLoad(int BType, int BID)
{
    if ((BSurface = Surface::OnLoad("BTexture.png")) == false)
        return false;

    Type = BType;
    ID = BID;
    return true;
}
became
bool BTiles::OnLoad(int BType, int BID)
{
    Type = BType;
    ID = BID;
    return true;
}
In the map class I added MSurface, which loads the image containing all the tile blocks.
And then to render I do the following:
void GMap::Render(SDL_Surface *MainSurface)
{
    for (int i = 0; i < TileList.size(); i++)
    {
        TileList[i].OnRender(MainSurface, MSurface, 0, 0);
    }
}
MSurface is the surface that contains the image.
Each tile receives MSurface as an external surface, and that one surface holds all the tile images.
Therefore, instead of creating 1024 surfaces, I only create one. Now it takes about 2 seconds to load a lot more than it did before, and it also fixed my problem of not all tiles being rendered.
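For anyone hitting the same thing with plain SDL rather than a custom Surface wrapper, the same pattern looks roughly like this (a sketch assuming SDL 1.2 with SDL_image; the 512 and 16 constants mirror the values used above):
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>

SDL_Surface *g_tileSheet = IMG_Load("BTexture.png"); // loaded once, shared by every tile

void BlitTile(SDL_Surface *screen, int type, int id)
{
    SDL_Rect src;                  // which 16x16 cell of the sheet to copy
    src.x = (type * 16) % 512;
    src.y = ((type * 16) / 512) * 16;
    src.w = 16;
    src.h = 16;

    SDL_Rect dst;                  // where on screen the tile goes
    dst.x = (id * 16) % 512;
    dst.y = ((id * 16) / 512) * 16;
    dst.w = 16;
    dst.h = 16;

    SDL_BlitSurface(g_tileSheet, &src, screen, &dst);
}
The important part is simply that IMG_Load runs once at map load, not once per tile.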