Determine which tile is clicked in a window - c++

I am drawing a tilemap on an SFML RenderWindow. I want to determine which tile the user clicked, but I just can't seem to find a solution. First of all, each tile is 32 pixels wide and 32 pixels high.
What I try at the moment: get the position of the click, then loop through the tilemap until a tile is found whose bounds contain that position. So if I click on (100,100), the matching tile should begin at (96,96), but this does not seem to work.
Here is the code snippet for my function getTile(mouse position x, mouse position y):
Tile* TileMap::getTile(int x, int y)
{
    Tile *t = NULL;
    for(int i = 0; i < tilemap.size(); i++)
    {
        for(int j = 0; j < tilemap[i].size(); j++)
        {
            if(x > tilemap[i][j].sprite.getPosition().x
                && x < (tilemap[i][j].sprite.getPosition().x+32))
            {
                if(y > tilemap[i][j].sprite.getPosition().y
                    && y < (tilemap[i][j].sprite.getPosition().y+32))
                {
                    t = &tilemap[i][j];
                    break;
                }
            }
        }
    }
    return t;
}

Based on your code, I am going to assume that you are basing your tilemap on a 2d array of Tiles: tilemap[x][y]. I am also going to assume that tilemap[0][0] is the top left tile.
There should be a much easier way to find out which tile is being clicked on instead of testing every single tile.
If you are at 100,100 and tiles are 32x32, then we can get the x and y of the tile within the tilemap by doing something as simple as:
x = 100 / 32 = 3
y = 100 / 32 = 3
Therefore, the tile in your tilemap that corresponds to a mouse position of (100,100) is tilemap[3][3].
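For example, a minimal sketch of what getTile could look like with this approach (assuming tilemap[x][y] indexing and 32x32 tiles as above, and that a click outside the map should return NULL):

Tile* TileMap::getTile(int x, int y)
{
    // Integer division maps a pixel coordinate straight to a tile index.
    int tileX = x / 32;
    int tileY = y / 32;

    // Guard against clicks outside the map.
    if(tileX < 0 || tileX >= (int)tilemap.size()) return NULL;
    if(tileY < 0 || tileY >= (int)tilemap[tileX].size()) return NULL;

    return &tilemap[tileX][tileY];
}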

Related

How to create a Minecraft chunk in opengl?

I'm learning OpenGL and I have tried to make a voxel game like Minecraft. In the beginning, everything was good: I created a CubeRenderer class to render a cube at a given position. The picture below shows what I had at that point.
https://imgur.com/yqn783x
And then I hit a serious problem when I tried to create a large terrain: performance dropped badly, with the frame rate at around 15 FPS, I think.
Next, I figured out that Minecraft-style chunks and a face-culling algorithm can solve the slow performance by dividing the world map into small pieces (chunks) and rendering only the visible faces of each cube. So how do I create a chunk the right way, and how is the face-culling algorithm applied within a chunk?
So far, this is what I have tried:
I read about Chunk in Minecraft at https://minecraft.gamepedia.com/Chunk
I created a demo Chunk with the code below (it is not the complete code, because I later removed it).
I created a CubeData type that contains a cube position and a cube type.
And I call the GenerateTerrain function to build simple chunk data (16x16x16) like below (CHUNK_SIZE is 16):
for (int x = 0; x < CHUNK_SIZE; x++) {
    for (int y = 0; y < CHUNK_SIZE; y++) {
        for (int z = 0; z < CHUNK_SIZE; z++) {
            CubeType cubeType = { GRASS_BLOCK };
            Location cubeLocation = { x, y, z };
            CubeData cubeData = { cubeLocation, cubeType };
            this->Cubes[x][y][z] = cubeData;
        }
    }
}
After that, I had a boolean array called "mask" whose values are 0 (not visible) or 1 (visible) and match the corresponding cube data. Then I call the Render function of the Chunk class to render a chunk. The code below is roughly what I did (it is not the complete code, because I removed it and replaced it with new code):
for (int x = 0; x < CHUNK_SIZE; x++) {
    for (int y = 0; y < CHUNK_SIZE; y++) {
        for (int z = 0; z < CHUNK_SIZE; z++) {
            for (int side = 0; side < 6; side++) {
                if (this->mask[x][y][z][side] == true) cubeRenderer.Render(cubeData[x][y][z]);
            }
        }
    }
}
But the result was that everything was still slow (better than before, though: from 15 FPS up to maybe 25-30 FPS).
I guess it is not a GPU problem but a CPU problem, because there are too many loops in the render call.
So I kept researching, because I think my approach was wrong. There must be a right way to create a chunk, right?
So I found the solution of putting every visible vertex into one VBO, so I only have to bind the VBO once.
The code below shows what I tried:
cout << "Generating Terrain..." << endl;
for (int side = 0; side < 6; side++) {
for (int x = 0; x < CHUNK_SIZE; x++) {
for (int y = 0; y < CHUNK_SIZE; y++) {
for (int z = 0; z < CHUNK_SIZE; z++) {
if (this->isVisibleSide(x, y, z, side) == true) {
this->cubeRenderer.AddVerticleToVBO(this->getCubeSide(side), glm::vec3(x, y, z), this->getTexCoord(this->Cubes[x][y][z].cubeType, side));
}
}
}
}
}
this->cubeRenderer.GenerateVBO();
And I call render only once:
void CubeChunk::Update()
{
    this->cubeRenderer.Render(); // with the VBO data already initialized above
}
And I got this:
https://imgur.com/YqsrtPP
I think my way was wrong.
So what should I do to create a chunk? Any suggestions?
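As an aside, here is a minimal, self-contained sketch of the kind of neighbour test the isVisibleSide call above suggests: a face only needs to go into the VBO if the neighbouring cell is outside the chunk or empty. The AIR block type and the blocks array are assumptions made for this illustration, not the question's actual classes:

const int CHUNK_SIZE = 16;

enum BlockType { AIR, GRASS_BLOCK };

struct Chunk {
    BlockType blocks[CHUNK_SIZE][CHUNK_SIZE][CHUNK_SIZE];

    bool isVisibleSide(int x, int y, int z, int side) const {
        // Offsets for the six sides: +X, -X, +Y, -Y, +Z, -Z.
        static const int off[6][3] = {
            { 1, 0, 0 }, { -1, 0, 0 },
            { 0, 1, 0 }, { 0, -1, 0 },
            { 0, 0, 1 }, { 0, 0, -1 }
        };
        int nx = x + off[side][0];
        int ny = y + off[side][1];
        int nz = z + off[side][2];

        // Faces on the chunk boundary are treated as visible here; with
        // neighbouring chunks you would look up the adjacent chunk instead.
        if (nx < 0 || nx >= CHUNK_SIZE ||
            ny < 0 || ny >= CHUNK_SIZE ||
            nz < 0 || nz >= CHUNK_SIZE)
            return true;

        // The face is hidden when the neighbouring block is solid.
        return blocks[nx][ny][nz] == AIR;
    }
};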

How to account for spacing between tiles in a tile sheet

My tile sheet has tiles that are 64x64, but between each tile there is a 10px gap, and I need to account for that gap when setting the texture rectangle in order to draw each tile.
I tried simply adding the spacing when setting the texture rectangle, but the image still looks distorted.
for (auto y = 0u; y < map.getTileCount().y; ++y)
{
    for (auto x = 0u; x < map.getTileCount().x; ++x)
    {
        auto posX = static_cast<float>(x * map.getTileSize().x);
        auto posY = static_cast<float>(y * map.getTileSize().y);
        sf::Vector2f position(posX, posY);
        tileSprite.setPosition(position);

        auto tileID = tiles[y * map.getTileCount().x + x].ID; //the id of the current tile
        if (tileID == 0)
        {
            continue; //empty tile
        }

        auto i = 0;
        while (tileID < tileSets[i].getFirstGID())
        {
            ++i;
        }

        auto relativeID = tileID - tileSets[i].getFirstGID();
        auto tileX = relativeID % tileSets[i].getColumnCount();
        auto tileY = relativeID / tileSets[i].getColumnCount();
        textureRect.left = tileX * tileSets[i].getTileSize().x; //i am guessing this is where
                                                                //i should account for the spacing
        textureRect.top = tileY * tileSets[i].getTileSize().y;

        tileSprite.setTexture(mTextureHolder.get(Textures::SpriteSheet));
        tileSprite.setTextureRect(textureRect);
        mMapTexture.draw(tileSprite);
    }
}
The code itself works and draws the tiles at the correct sizes. If I use a normal 64x64 tileset without any spacing, the final image looks right, but with spacing included the tiles are cut off.
How do I account for the gap between the tiles when setting the texture rectangle?
This is how it looks:
This is how it should look:
(NOTE: the "how it should look" image is from the Tiled editor.)
Removing the spaces with a Python script I found and GIMP fixed the problem, but if anyone knows how to account for the spacing, feel free to answer, as I might need it someday.
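For reference, a hedged sketch of how the spacing could be folded into the texture rectangle calculation (the helper and its parameters are this example's own; a Tiled-style tileset usually also has an outer margin, assumed 0 here):

#include <SFML/Graphics.hpp>

// Sketch: compute the source rectangle for a tile in a sheet that has a
// gap between tiles and, optionally, a margin around the outer edge.
// tileX/tileY are the column/row of the tile inside the sheet.
sf::IntRect tileSourceRect(int tileX, int tileY,
                           int tileWidth, int tileHeight,
                           int spacing, int margin = 0)
{
    int left = margin + tileX * (tileWidth + spacing);
    int top  = margin + tileY * (tileHeight + spacing);
    return sf::IntRect(left, top, tileWidth, tileHeight);
}

In the loop above, the two textureRect assignments would then become something like textureRect = tileSourceRect(tileX, tileY, 64, 64, 10);.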

Tileset crop calculation C++

So I have this problem with my tileset rendering. It's a little tricky to explain, but I'll do my best.
Basically, I load these "tile numbers" from TMX files, and they only give me the tile's index within the tileset. So if I use a tile that sits on a row below the first one, I still only get a single number, but I want both an X and a Y position so I can calculate where to crop the image from when I display it in my game.
So my array would look something like this:
1,1,1,2,1,1,1,1,9,1,6,1,1
1,1,1,2,2,2,1,1,1,1,1,1,1
1,1,1,1,1,2,1,1,1,1,1,1,1
1,1,1,1,1,2,1,1,1,3,1,1,1
1,6,1,1,1,2,1,1,1,1,1,1,1
1,1,5,5,1,2,2,2,1,1,1,1,1
1,1,1,1,1,1,1,2,1,1,1,1,1
Tileset:
http://puu.sh/9Tv2X/7938b6abf4.png
So when I render the tiles from the first row of this tileset, numbers 1-4 (since the tileset is 4 tiles wide), it works fine, but anything past that gets cropped outside the image.
For that I need the Y position to increase, and the X position to reset, every time the X offset (32 * (tileNumber - 1)) goes past the tileset width, but my brain can't figure out how to do this.
Tile width and height are 32.
My code for drawing the tileset:
void TileMap::Draw()
{
    for (unsigned int i = 0; i < mapVector.size(); i++)
    {
        for (unsigned int j = 0; j < mapVector[i].size(); j++)
        {
            int tileY = 0;
            int tileX = mapVector[i][j] - 1;
            if (tileWidth * mapVector[i][j] > tilesetWidth)
            {
                //now what
            }
            if (mapVector[i][j] > 0)
            {
                tileSprite->SetPosition(j * tileWidth, i * tileHeight);
                tileSprite->SetTextureRect(tileX * tileWidth, tileY * tileHeight, 32, 32);
                tileSprite->Draw();
            }
        }
    }
}
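A hedged sketch of the calculation being asked for, assuming the tileset is a fixed number of tiles wide (4 in the linked image): integer division and modulo turn the tile number into a column and a row directly. This would replace the tileX/tileY lines inside the loop above:

// Sketch: convert the 1-based tile number into tileset coordinates.
int tileIndex   = mapVector[i][j] - 1;        // make it 0-based
int tilesPerRow = tilesetWidth / tileWidth;   // e.g. 128 / 32 = 4
int tileX = tileIndex % tilesPerRow;          // column within the tileset
int tileY = tileIndex / tilesPerRow;          // row within the tileset

tileSprite->SetTextureRect(tileX * tileWidth, tileY * tileHeight, 32, 32);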

Optimizing a simple 2D Tile engine (+potential bugfix)

Preface
Yes, there is plenty to cover here... but I'll do my best to keep this as well-organized, informative and straight-to-the-point as I possibly can!
Using the HGE library in C++, I have created a simple tile engine.
And thus far, I have implemented the following designs:
A CTile class, representing a single tile within a CTileLayer, containing row/column information as well as an HGE::hgeQuad (which stores vertex, color and texture information, see here for details).
A CTileLayer class, representing a two-dimensional 'plane' of tiles (which are stored as a one-dimensional array of CTile objects), containing the # of rows/columns, X/Y world-coordinate information, tile pixel width/height information, and the layer's overall width/height in pixels.
A CTileLayer is responsible for rendering any tiles which are either fully or partially visible within the boundaries of a virtual camera 'viewport', and for skipping any tiles which are outside of this visible range. Upon creation, it pre-calculates all the information to be stored within each CTile object, so the core of the engine has more room to breathe and can focus strictly on the render loop. Of course, it also handles proper deallocation of each contained tile.
Issues
The problem I am now facing essentially boils down to the following architectural/optimization issues:
In my render loop, even though I am not rendering any tiles which are outside of the visible range, I am still looping through all of the tiles, which seems to have a major performance impact for larger tilemaps (i.e., anything above 100x100 rows/columns at 64x64 tile dimensions still drops the framerate by 50% or more).
Eventually, I intend to create a fancy tilemap editor to coincide with this engine.
However, since I am storing all two-dimensional information inside one or more 1D arrays, I don't see how I could implement some sort of rectangular-select & copy/paste feature without a MAJOR performance hit, namely looping through every tile twice per frame. And yet if I used 2D arrays, there would be a slightly smaller but more universal FPS drop!
Bug
As stated before... In my render code for a CTileLayer object, I have optimized which tiles are to be drawn based upon whether or not they are within viewing range. This works great, and for larger maps I noticed only a 3-8 FPS drop (compared to a 100+ FPS drop without this optimization).
But I think I'm calculating this range incorrectly, because after scrolling halfway through the map you can start to see a gap (on the topmost & leftmost sides) where tiles aren't being rendered, as if the clipping range is increasing faster than the camera can move (even though they both move at the same speed).
This gap gradually increases in size the further along into the X & Y axis you go, eventually eating up nearly half of the top & left sides of the screen on a large map.
My render code for this is shown below...
Code
//
// [Allocate]
// For pre-calculating tile information
// - Rows/Columns = Map Dimensions (in tiles)
// - Width/Height = Tile Dimensions (in pixels)
//
void CTileLayer::Allocate(UINT numColumns, UINT numRows, float tileWidth, float tileHeight)
{
    m_nColumns = numColumns;
    m_nRows = numRows;

    float x, y;
    UINT column = 0, row = 0;
    const ULONG nTiles = m_nColumns * m_nRows;
    hgeQuad quad;

    m_tileWidth = tileWidth;
    m_tileHeight = tileHeight;
    m_layerWidth = m_tileWidth * m_nColumns;
    m_layerHeight = m_tileHeight * m_nRows;

    if(m_tiles != NULL) Free();
    m_tiles = new CTile[nTiles];

    for(ULONG l = 0; l < nTiles; l++)
    {
        m_tiles[l] = CTile();
        m_tiles[l].column = column;
        m_tiles[l].row = row;

        x = (float(column) * m_tileWidth) + m_offsetX;
        y = (float(row) * m_tileHeight) + m_offsetY;

        quad.blend = BLEND_ALPHAADD | BLEND_COLORMUL | BLEND_ZWRITE;
        quad.tex = HTEXTURE(nullptr); //Replaced for the sake of brevity (in the engine's code, I used a globally allocated texture array and did some random tile generation here)

        for(UINT i = 0; i < 4; i++)
        {
            quad.v[i].z = 0.5f;
            quad.v[i].col = 0xFF7F7F7F;
        }

        quad.v[0].x = x;
        quad.v[0].y = y;
        quad.v[0].tx = 0;
        quad.v[0].ty = 0;

        quad.v[1].x = x + m_tileWidth;
        quad.v[1].y = y;
        quad.v[1].tx = 1.0;
        quad.v[1].ty = 0;

        quad.v[2].x = x + m_tileWidth;
        quad.v[2].y = y + m_tileHeight;
        quad.v[2].tx = 1.0;
        quad.v[2].ty = 1.0;

        quad.v[3].x = x;
        quad.v[3].y = y + m_tileHeight;
        quad.v[3].tx = 0;
        quad.v[3].ty = 1.0;

        memcpy(&m_tiles[l].quad, &quad, sizeof(hgeQuad));

        if(++column > m_nColumns - 1) {
            column = 0;
            row++;
        }
    }
}
//
// [Render]
// For drawing the entire tile layer
// - X/Y = world position
// - Top/Left = screen 'clipping' position
// - Width/Height = screen 'clipping' dimensions
//
bool CTileLayer::Render(HGE* hge, float cameraX, float cameraY, float cameraTop, float cameraLeft, float cameraWidth, float cameraHeight)
{
    // Calculate the current number of tiles
    const ULONG nTiles = m_nColumns * m_nRows;

    // Calculate min & max X/Y world pixel coordinates
    const float scalarX = cameraX / m_layerWidth;  // This is how far (from 0 to 1, in world coordinates) along the X-axis we are within the layer
    const float scalarY = cameraY / m_layerHeight; // This is how far (from 0 to 1, in world coordinates) along the Y-axis we are within the layer
    const float minX = cameraTop + (scalarX * float(m_nColumns) - m_tileWidth); // Leftmost pixel coordinate within the world
    const float minY = cameraLeft + (scalarY * float(m_nRows) - m_tileHeight);  // Topmost pixel coordinate within the world
    const float maxX = minX + cameraWidth + m_tileWidth;   // Rightmost pixel coordinate within the world
    const float maxY = minY + cameraHeight + m_tileHeight; // Bottommost pixel coordinate within the world

    // Loop through all tiles in the map
    for(ULONG l = 0; l < nTiles; l++)
    {
        CTile tile = m_tiles[l];

        // Calculate this tile's X/Y world pixel coordinates
        float tileX = (float(tile.column) * m_tileWidth) - cameraX;
        float tileY = (float(tile.row) * m_tileHeight) - cameraY;

        // Check if this tile is within the boundaries of the current camera view
        if(tileX > minX && tileY > minY && tileX < maxX && tileY < maxY) {
            // It is, so draw it!
            hge->Gfx_RenderQuad(&tile.quad, -cameraX, -cameraY);
        }
    }
    return false;
}
//
// [Free]
// Gee, I wonder what this does? lol...
//
void CTileLayer::Free()
{
    delete [] m_tiles;
    m_tiles = NULL;
}
Questions
What can be done to fix those architectural/optimization issues, without greatly impacting any other rendering optimizations?
Why is that bug occurring? How can it be fixed?
Thank you for your time!
Optimising the iteration over the map is fairly straightforward.
Given a visible rect in world coordinates (left, top, right, bottom) it's fairly trivial to work out the tile positions, simply by dividing by the tile size.
Once you have those tile coordinates (tl, tt, tr, tb) you can very easily calculate the first visible tile in your 1D array. (The way you calculate any tile index from a 2D coordinate is (y*width)+x - remember to make sure the input coordinate is valid first though.) You then just have a double for loop to iterate the visible tiles:
int visiblewidth = tr - tl + 1;
int visibleheight = tb - tt + 1;

for( int rowidx = ( tt * layerwidth ) + tl; visibleheight--; rowidx += layerwidth )
{
    for( int tileidx = rowidx, cx = visiblewidth; cx--; tileidx++ )
    {
        // render m_Tiles[ tileidx ]...
    }
}
You can use a similar system for selecting a block of tiles. Just store the selection coordinates and calculate the actual tiles in exactly the same way.
As for your bug, why do you have x, y, left, right, width, height for the camera? Just store camera position (x,y) and calculate the visible rect from the dimensions of your screen/viewport along with any zoom factor you have defined.
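To make the first step concrete, here is a minimal sketch of how the visible tile range (tl, tt, tr, tb) could be derived from a camera position and viewport size; the variable names are this example's own, and the indices are clamped so the loops above never leave the map:

#include <algorithm> // std::max, std::min

// Sketch: derive the visible tile range from the camera's top-left world
// position and the viewport size, clamped to the map dimensions (in tiles).
void visibleTileRange(float cameraX, float cameraY,
                      float viewWidth, float viewHeight,
                      float tileWidth, float tileHeight,
                      int layerwidth, int layerheight,
                      int& tl, int& tt, int& tr, int& tb)
{
    tl = std::max(0, int(cameraX / tileWidth));                               // first visible column
    tt = std::max(0, int(cameraY / tileHeight));                              // first visible row
    tr = std::min(layerwidth - 1, int((cameraX + viewWidth) / tileWidth));    // last visible column
    tb = std::min(layerheight - 1, int((cameraY + viewHeight) / tileHeight)); // last visible row
}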
This is a pseudo-codish example; geometry variables are 2D vectors. Both the camera object and the tilemap have a center position and an extent (half size). The math is just the same even if you decide to stick with plain numbers, and even if you don't use center coordinates and extents, perhaps you'll get an idea of the math. All of this code is in the render function and is rather simplified. Also, this example assumes you already have a 2D-array-like object that holds the tiles.
So, first a full example, and I'll explain each part further down.
// x and y are counters, sx is a placeholder for x's start value, as x will
// be in the inner loop and needs to be reset each iteration.
// mx and my are the values x and y will count towards.
x=0,
y=0,
sx=0,
mx=total_number_of_tiles_on_x_axis,
my=total_number_of_tiles_on_y_axis

// calculate the lowest and highest worldspace values of the cam
min = cam.center - cam.extent
max = cam.center + cam.extent

// subtract the tilemap corners and divide by tilesize to get
// the amount of tiles that are outside of the camera's scope
floor = Math.floor( ( min - ( tilemap.center - tilemap.extent ) ) / tilesize )
ceil  = Math.ceil( ( max - ( tilemap.center + tilemap.extent ) ) / tilesize )

if(floor.x > 0)
    sx+=floor.x
if(floor.y > 0)
    y+=floor.y
if(ceil.x < 0)
    mx+=ceil.x
if(ceil.y < 0)
    my+=ceil.y

for(; y<my; y++)
    // x needs to be reset each y iteration, its start value is stored in sx
    for(x=sx; x<mx; x++)
        // render tile x in tile layer y
Explained bit by bit. First thing in the render function, we will use a few variables.
// x and y are counters, sx is a placeholder for x's start value, as x will
// be in the inner loop and needs to be reset each iteration.
// mx and my are the values x and y will count towards.
x=0,
y=0,
sx=0,
mx=total_number_of_tiles_on_x_axis,
my=total_number_of_tiles_on_y_axis
To prevent rendering all tiles, you need to provide either a camera-like object or information on where the visible area starts and stops (in worldspace if the scene is movable)
In this example I'm providing a camera object to the render function which has a center and an extent stored as 2d vectors.
// calculate the lowest and highest worldspace values of the cam
min = cam.center - cam.extent
max = cam.center + cam.extent

// subtract the tilemap corners and divide by tilesize to get
// the amount of tiles that are outside of the camera's scope
floor = Math.floor( ( min - ( tilemap.center - tilemap.extent ) ) / tilesize )
ceil  = Math.ceil( ( max - ( tilemap.center + tilemap.extent ) ) / tilesize )
// floor & ceil are 2D vectors
Now, if floor is higher than 0 or ceil is lower than 0 on any axis, it means that there are that many tiles outside of the camera's scope on that axis.
// check if there are any tiles outside to the left of or above the camera
if(floor.x > 0)
    sx+=floor.x // set the start value of sx to the amount of tiles outside of the camera
if(floor.y > 0)
    y+=floor.y // set the start value of y to the amount of tiles outside of the camera

// test if there are any tiles outside to the right of or below the camera
if(ceil.x < 0)
    mx+=ceil.x // then add the negative value to mx (max x)
if(ceil.y < 0)
    my+=ceil.y // then add the negative value to my (max y)
A normal render of the tilemap would go from 0 to the number of tiles on each axis, using a loop within a loop to account for both axes. But thanks to the above code, x and y will always stay within the borders of the camera.
// will loop through only the visible tiles
for(; y<my; y++)
    // x needs to be reset each y iteration, its start value is stored in sx
    for(x=sx; x<mx; x++)
        // render tile x in tile layer y
Hope this helps!
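For completeness, a rough C++ translation of the pseudocode above, under the same assumptions (center/extent stored as 2D vectors, square tiles, and a simple 2D container of tiles); the Vec2 type and function name are just for this sketch:

#include <cmath>
#include <vector>

struct Vec2 { float x, y; }; // minimal 2D vector for the sketch

// Render only the tiles visible to a camera described by a center and an
// extent (half size), mirroring the pseudocode above.
void renderVisibleTiles(const std::vector<std::vector<int>>& tiles,
                        Vec2 camCenter, Vec2 camExtent,
                        Vec2 mapCenter, Vec2 mapExtent,
                        float tileSize)
{
    int mx = (int)tiles[0].size(); // total number of tiles on the x axis
    int my = (int)tiles.size();    // total number of tiles on the y axis
    int sx = 0, y = 0;

    // Lowest and highest worldspace values of the camera.
    Vec2 camMin = { camCenter.x - camExtent.x, camCenter.y - camExtent.y };
    Vec2 camMax = { camCenter.x + camExtent.x, camCenter.y + camExtent.y };

    // Number of whole tiles outside the camera on each side.
    int floorX = (int)std::floor((camMin.x - (mapCenter.x - mapExtent.x)) / tileSize);
    int floorY = (int)std::floor((camMin.y - (mapCenter.y - mapExtent.y)) / tileSize);
    int ceilX  = (int)std::ceil((camMax.x - (mapCenter.x + mapExtent.x)) / tileSize);
    int ceilY  = (int)std::ceil((camMax.y - (mapCenter.y + mapExtent.y)) / tileSize);

    if (floorX > 0) sx += floorX;
    if (floorY > 0) y  += floorY;
    if (ceilX < 0)  mx += ceilX;
    if (ceilY < 0)  my += ceilY;

    for (; y < my; ++y)
        for (int x = sx; x < mx; ++x)
        {
            (void)tiles[y][x]; // render tile x of tile row y here
        }
}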

SDL - drawing 'negative' circles (Fog of War)

I have this 800x600 square I want to draw to the screen. I want to 'cut' circles in it (where the alpha would be 0). Basically I'm drawing this whole rectangle over a map, so through the circles I cut you can see the map; otherwise you see the grey square.
So, I assume you're trying to add fog of war to one of your games?
I had a small demo I made for a local university a few weeks ago to show A* pathfinding, so I thought I could add fog of war to it for you. Here are the results:
Initial map
First, you start with a complete map, totally visible
Fog
Then, I added a surface to cover the entire screen (take note that my map is smaller than the screen, so for this case I just added fog of war on the screen, but if you have scrolling, make sure it covers each map pixel 1:1)
mFogOfWar = SDL_CreateRGBSurface(SDL_HWSURFACE, in_Width, in_Height, 32, 0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000);
SDL_Rect screenRect = {0, 0, in_Width, in_Height};
SDL_FillRect(mFogOfWar, &screenRect, 0xFF202020);
Then, you need to draw it... I added this call after drawing the game objects and before drawing the UI
DrawSurface(mFogOfWar, 0, 0);
Where
void RenderingManager::DrawSurface(SDL_Surface* in_Surface, int in_X, int in_Y)
{
    SDL_Rect Dest = { in_X, in_Y, 0, 0 };
    SDL_BlitSurface(in_Surface, NULL, mScreen, &Dest);
}
Which should give you the following result:
"Punch Surface"
I then created a 32 bits .png that looks like this (checkerboard shows alpha)
When rendering my main character, I added this call:
gRenderingManager.RemoveFogOfWar(int(mX) + SPRITE_X_OFFSET, int(mY) + SPRITE_Y_OFFSET);
The offset is only there to center the punch with the sprite, basically, what I'm passing to RemoveFogOfWar is the center of my sprite.
Remove Fog Of War
Now the meat of the fog of war. I did two versions: one where the fog of war is removed permanently, and one where it is reset. My fog-of-war reset relies on my punch surface having a contour where the alpha is reset to 0, and on the fact that my character moves fewer pixels per frame than that contour is wide; otherwise I would keep the rect where my punch was applied and refill it before drawing the new punch.
Since I couldn't find a "multiply" blend with SDL, I decided to write a simple function that iterates on the punch surface and updates the alpha on the fog of war surface. The most important part is to make sure you stay within the bounds of your surfaces, so it takes up most of the code... there might be some crop functions but I didn't bother checking:
void RenderingManager::RemoveFogOfWar(int in_X, int in_Y)
{
    const int halfWidth = mFogOfWarPunch->w / 2;
    const int halfHeight = mFogOfWarPunch->h / 2;

    SDL_Rect sourceRect = { 0, 0, mFogOfWarPunch->w, mFogOfWarPunch->h };
    SDL_Rect destRect = { in_X - halfWidth, in_Y - halfHeight, mFogOfWarPunch->w, mFogOfWarPunch->h };

    // Make sure our rects stay within bounds
    if(destRect.x < 0)
    {
        sourceRect.x -= destRect.x;   // remove the pixels outside of the surface
        sourceRect.w -= sourceRect.x; // shrink to the surface, not to offset fog
        destRect.x = 0;
        destRect.w -= sourceRect.x;   // shrink the width to stay within bounds
    }
    if(destRect.y < 0)
    {
        sourceRect.y -= destRect.y;   // remove the pixels outside
        sourceRect.h -= sourceRect.y; // shrink to the surface, not to offset fog
        destRect.y = 0;
        destRect.h -= sourceRect.y;   // shrink the height to stay within bounds
    }

    int xDistanceFromEdge = (destRect.x + destRect.w) - mFogOfWar->w;
    if(xDistanceFromEdge > 0) // we're busting
    {
        sourceRect.w -= xDistanceFromEdge;
        destRect.w -= xDistanceFromEdge;
    }
    int yDistanceFromEdge = (destRect.y + destRect.h) - mFogOfWar->h;
    if(yDistanceFromEdge > 0) // we're busting
    {
        sourceRect.h -= yDistanceFromEdge;
        destRect.h -= yDistanceFromEdge;
    }

    SDL_LockSurface(mFogOfWar);

    Uint32* destPixels = (Uint32*)mFogOfWar->pixels;
    Uint32* srcPixels = (Uint32*)mFogOfWarPunch->pixels;

    static bool keepFogRemoved = false;

    for(int x = 0; x < destRect.w; ++x)
    {
        for(int y = 0; y < destRect.h; ++y)
        {
            Uint32* destPixel = destPixels + (y + destRect.y) * mFogOfWar->w + destRect.x + x;
            Uint32* srcPixel = srcPixels + (y + sourceRect.y) * mFogOfWarPunch->w + sourceRect.x + x;

            unsigned char* destAlpha = (unsigned char*)destPixel + 3; // fetch alpha channel
            unsigned char* srcAlpha = (unsigned char*)srcPixel + 3;   // fetch alpha channel

            if(keepFogRemoved == true && *srcAlpha > 0)
            {
                continue; // skip this pixel
            }

            *destAlpha = *srcAlpha;
        }
    }

    SDL_UnlockSurface(mFogOfWar);
}
Which then gave me this with keepFogRemoved = false even after the character had moved around
And this with keepFogRemoved = true
Validation
The important part is really to make sure you don't write outside of your pixel buffer, so watch out for negative offsets or offsets that would bring you beyond the width or height. To validate my code, I added a simple call to RemoveFogOfWar when the mouse is clicked and tried corners and edges to make sure I didn't have an "off by one" problem:
case SDL_MOUSEBUTTONDOWN:
{
    if(Event.button.button == SDL_BUTTON_LEFT)
    {
        gRenderingManager.RemoveFogOfWar(Event.button.x, Event.button.y);
    }
    break;
}
Notes
Obviously, you don't need a 32-bit texture for the "punch", but it was the clearest way I could think of to show you how to do it. It could be done using as little as 1 bit per pixel (on/off). You can also add some gradient, and change the
if(keepFogRemoved == true && *srcAlpha > 0)
{
    continue; // skip this pixel
}
To something like
if(*srcAlpha > *destAlpha)
{
    continue;
}
To keep a smooth blend like this:
3 State Fog of War
I thought I should add this... I added a way to create a 3 state fog of war: visible, seen and fogged.
To do this, I simply keep the SDL_Rect of where I last "punched" the fog of war, and if the alpha is lower than a certain value, I clamp it at that value.
So, by simply adding
for(int x = 0; x < mLastFogOfWarPunchPosition.w; ++x)
{
    for(int y = 0; y < mLastFogOfWarPunchPosition.h; ++y)
    {
        Uint32* destPixel = destPixels + (y + mLastFogOfWarPunchPosition.y) * mFogOfWar->w + mLastFogOfWarPunchPosition.x + x;
        unsigned char* destAlpha = (unsigned char*)destPixel + 3;

        if(*destAlpha < 0x60)
        {
            *destAlpha = 0x60;
        }
    }
}
mLastFogOfWarPunchPosition = destRect;
right before the loop where the fog of war is "punched", I get a fog of war similar to what you could have in games like StarCraft:
Now, since the "seen" fog of war is semi transparent, you will need to tweak your rendering method to properly clip "enemies" that would be in the fog, so you don't see them but you still see the terrain.
Hope this helps!