I have a custom image object that is organised in fixed-size tiles, arranged in a tile grid. Assume, for instance, that tileSize is 256 pixels (each tile is a 256x256-pixel subimage) and that my global image is 1000x500 pixels: this results in a 4x2 tile grid of 256x256-pixel tiles.
Now, given a point in global image coordinates, this is how I convert to tile coordinates (i.e., the tile's position in the tile grid, plus the pixel position inside that tile):
struct Point {
    int x;
    int y;
    Point(int x, int y) : x(x), y(y) {}
};
Point pGlobal = (...);
// Grid coordinates of the tile containing pGlobal
Point pTileGrid = Point(pGlobal.x / tileSize, pGlobal.y / tileSize);
// Pixel inside that tile that contains pGlobal
Point pTile = Point(pGlobal.x % tileSize, pGlobal.y % tileSize);
The problem is that this is painfully slow. How can I optimize this?
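One idea I have been considering: since my tileSize is a power of two (256 = 1 << 8), I assume the division and modulo could be replaced by a shift and a mask. A minimal sketch, only valid for non-negative coordinates (tileShift and tileMask are my own names):
const int tileShift = 8;                   // log2(tileSize)
const int tileMask = (1 << tileShift) - 1; // tileSize - 1 = 0xFF
Point pTileGrid = Point(pGlobal.x >> tileShift, pGlobal.y >> tileShift);
Point pTile = Point(pGlobal.x & tileMask, pGlobal.y & tileMask);
That said, a modern compiler should already do this transformation when tileSize is a compile-time constant, so I am not sure this explains the slowness.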
Thank you in advance. If I did not explain myself clearly at any point, please ask me to improve my formulation.
I have a question about 2D tilemap optimization. I render the tilemap, but it is too slow (the frame rate is about 50). I think I can restrict which tiles are rendered: instead of rendering all tiles, render only the tiles that are on the screen (device).
This is my current method:
// Lower layer
for (int y = 0; y < Height; ++y)
{
    for (int x = 0; x < Width; ++x)
    {
        // A tile index of -1 means a null tile (not drawn).
        if (Layer1[y][x] != -1)
        {
            // EPN_CheckCollision() is an AABB collision function:
            // it returns true if the two rects overlap.
            // The screen rect is padded by 32 pixels on each side as a margin.
            if (EPN_CheckCollision(EPN_Rect(EPN_Pos(x * 32, y * 32) - CharacterPos),
                                   EPN_Rect(0 - 32, 0 - 32, 1024 + 32, 768 + 32)))
            {
                // EPN_Pos consists of (float PosX, float PosY).
                EPN_Pos TilePos = EPN_Pos(x * 32, y * 32) - CharacterPos;
                // EPN_Rect consists of:
                //   float TopX, TopY;       // top-left corner
                //   float BottomX, BottomY; // bottom-right corner (not width/height)
                // Source rect of this tile inside the 8-tiles-wide tileset texture.
                EPN_Rect TileRect = EPN_Rect(Layer1[y][x] % 8 * 32,
                                             Layer1[y][x] / 8 * 32,
                                             Layer1[y][x] % 8 * 32 + 32,
                                             Layer1[y][x] / 8 * 32 + 32);
                // Blt() is the texture render function:
                // the 2nd parameter is the render position, the 3rd is the source rect.
                pEPN_TI->Blt("MapTileset", TilePos, TileRect);
            }
        }
    }
}
This is my tilemap render method. (I use the EPN engine, which is built on DirectX and not widely known, so I annotated my code.)
I render the tiles that collide with the device screen (1024x768, plus a margin), because I only want to render the part of the tilemap that is visible on screen (I do not render tiles outside the device screen). So I check AABB collision between each tile and the 1024x768 device screen, and now I render only the necessary tiles.
But I think this method has a problem: even though it skips drawing off-screen tiles, the for loops still iterate over every tile in the map. What an inefficient method...
Maybe my game's frame-rate problem is in this method, so may I ask Stack Overflow how I could improve it?
Are there other ways to optimize tilemap rendering? Please give me some tips.
P.S. I'm sorry about my knotty question, and please excuse my English.
You should only be rendering enough tiles to cover the screen. For example, if your screen size is 640x480 and your tile size is 16x16, then:
Max tiles on X = (640/16)+2;
Max tiles on Y = (480/16)+2;
Notice how we add 2 to leave a margin of one extra tile on each side. The next thing we need to do is work out where we are in the tile map; for this we simply divide the camera x position by the tile width (and the y position by the tile height).
For example if the camera is at x=500 and y=20 then:
X tile index = 500/16
Y tile index = 20/16
You must also render your tile grid at an offset of 500%16 and 20%16 to account for the "sub tile pixel" scrolling.
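Putting it together, a minimal sketch of that render loop (cameraX, cameraY, map, and drawTile are placeholder names of mine; clamp the map indices in real code):
int firstTileX = cameraX / 16;  // leftmost visible tile column
int firstTileY = cameraY / 16;  // topmost visible tile row
int offsetX = cameraX % 16;     // sub-tile pixel offset
int offsetY = cameraY % 16;
int tilesX = (640 / 16) + 2;    // max tiles on X, with margin
int tilesY = (480 / 16) + 2;    // max tiles on Y, with margin
for (int y = 0; y < tilesY; ++y)
{
    for (int x = 0; x < tilesX; ++x)
    {
        // Draw at a negative sub-tile offset to account for scrolling.
        drawTile(x * 16 - offsetX, y * 16 - offsetY,
                 map[firstTileY + y][firstTileX + x]);
    }
}
This way the loop cost depends only on the screen size, not on the map size.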
For the collision it's even easier: you only need to check collision with the tiles the player overlaps, so:
If the player size is 16x20 pixels and at position 120,200:
X tile index = 120/16
Y tile index = 200/16
Num X tiles to check = 16/16
Num Y tiles to check = 20/16
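In code, that could look something like this (a sketch; checkTile is a placeholder name of mine, and note that a player straddling a tile boundary overlaps one more tile than the plain size/tileSize division suggests, hence the inclusive end indices):
int firstX = 120 / 16;            // player's leftmost tile (7)
int firstY = 200 / 16;            // player's topmost tile (12)
int lastX = (120 + 16 - 1) / 16;  // rightmost tile the player touches (8)
int lastY = (200 + 20 - 1) / 16;  // bottommost tile the player touches (13)
for (int ty = firstY; ty <= lastY; ++ty)
    for (int tx = firstX; tx <= lastX; ++tx)
        checkTile(tx, ty);        // hypothetical per-tile collision test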
Hopefully this makes sense.
To render text with OpenGL I take the textured-quad approach: I draw a quad for every character that needs to be represented. I store all the character texture information in a single texture and use glScalef and glTranslatef on the texture matrix to select the correct character. At first I only put a few characters in the image used to create the texture, and it worked well.
Since then I have needed to render more characters, like lowercase letters. I tried adding more characters, but now my text ends up misaligned and smaller.
Is there a proper way to create character maps, or is my approach altogether wrong?
Note: I am using a monospaced font, so glyph dimensions should not be the issue; however, I would like to add support for fonts with non-uniform character sizes as well.
EDIT: I'm using a vertex buffer for drawing rather than immediate mode.
EDIT 2: The texture containing the character map has 9 rows, each with 11 characters. Since the characters are the same size, I use glScalef on the texture matrix to scale to 1/11 of the texture's width and 1/9 of its height. The VBO defines the quad (0,0),(1,0),(0,1),(1,1) and tex coords (0,0),(1,0),(0,1),(1,1). The misalignment seems to be due to my transformations not fitting each glyph exactly. How are the optimal bounds for each glyph calculated?
In hopes that this may be useful to others: the optimal glyph bounds can be calculated by first normalizing the pixel offsets of each letter so that they fall in the range 0 to 1. The widths and heights can also be normalized to determine the correct bounding box. If the widths and heights are uniform, as in monospaced fonts, static width and height values may be used for computing the glyph bounds.
Storing an array of pixel position values for each glyph would be tedious to calculate by hand, so it is better to start the first glyph at the first pixel of the character map and keep no spacing between letters. This makes calculating the bottom-left UV coordinates easy with for loops:
// Fills uvs with the bottom-left texture coordinate of each glyph.
// charWidth and charHeight are the normalized glyph size (1/cols, 1/rows).
void GetUVs(Vector2* uvs, float charWidth, float charHeight, int cols, int rows)
{
    for (int x = 0; x < cols; x++)
    {
        for (int y = 0; y < rows; y++)
        {
            int index = x + cols * y;
            uvs[index].x = x * charWidth;
            uvs[index].y = y * charHeight;
        }
    }
}
The remaining corners of each bound can be calculated by adding the width, the height, or both, respectively.
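For example, given the bottom-left UV of a glyph computed by GetUVs, the other three corners could be derived like this (a sketch using the same normalized charWidth and charHeight):
float u0 = uvs[index].x;   // bottom-left
float v0 = uvs[index].y;
float u1 = u0 + charWidth; // bottom-right: add the width
float v1 = v0;
float u2 = u0;             // top-left: add the height
float v2 = v0 + charHeight;
float u3 = u0 + charWidth; // top-right: add both
float v3 = v0 + charHeight;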
perspectiveCamera = new PerspectiveCamera(90, 80, 48);
perspectiveCamera.position.set(0,0, 10f);
perspectiveCamera.lookAt(0,0,0);
perspectiveCamera.near = .01f;
perspectiveCamera.far = 300f;
My screen resolution is 800x480.
pCamera.unproject(mytouchPoint) is supposed to give results in the range
x = 0 to 80
y = 0 to 48
but I am getting 0.000xyz for both the x and y axes.
Don't use such a small value for your camera's near plane; it will cause floating-point errors and/or z-fighting.
The width and height values you provided to the PerspectiveCamera constructor are used to calculate the aspect ratio. There is no single 2D resolution (the size of the screen plane in world coordinates) in a 3D perspective.
You cannot simply unproject a 2D screen coordinate to a single 3D coordinate: for each 2D screen coordinate there is an infinite number of possible 3D coordinates. Therefore the camera's unproject method uses the z-coordinate of the provided screen coordinate to decide which of those 3D coordinates to return. If z is zero, it gives the coordinate on the near plane; if z is one, it gives the coordinate on the far plane.
Assuming you used z=0 for myTouchPoint, and given that your near plane is very close (since your near value is very small), the unprojected value will be very small and therefore (almost) equal to zero.
For more information, you might want to have a look at: http://blog.xoppa.com/interacting-with-3d-objects/
I found a way to do this easily, and it's fast too. Just create a plane at the required depth z and find the intersection of the pick ray with it:
float zDepth = -10f; // your choice, or an object's z position

public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    Ray ray = camera.getPickRay(screenX, screenY);
    Plane plane = new Plane();
    plane.set(0, 0, 1, 0); // the xy plane, with its normal (z) facing the screen
    plane.d = zDepth;      // the depth in 3D for the coordinates
    Vector3 yourVector3Position = new Vector3();
    Intersector.intersectRayPlane(ray, plane, yourVector3Position);
    return true; // the intersection point is now in yourVector3Position
}
I am using the D3DXSPRITE method to draw my map tiles to the screen. I just added a zoom function that zooms in when you hold the up arrow, but I noticed you can now see gaps between the tiles. Here are some screenshots:
normal size (32x32 per tile)
zoomed in (you can see white gaps between the tiles)
zoomed out (even worse!)
Here's the code snippet with which I translate and scale the world:
D3DXMATRIX matScale, matPos;
D3DXMatrixScaling(&matScale, zoom_, zoom_, 0.0f);
D3DXMatrixTranslation(&matPos, xpos_, ypos_, 0.0f);
device_->SetTransform(D3DTS_WORLD, &(matPos * matScale));
And this is how I draw the map (tiles are in a vector of vectors of tiles, and I haven't done culling yet):
LayerInfo *p_linfo = NULL;
RECT rect = {0};
D3DXVECTOR3 pos;
pos.x = 0.0f;
pos.y = 0.0f;
pos.z = 0.0f;

for (short y = 0; y < BottomTile(); ++y)
{
    for (short x = 0; x < RightTile(); ++x)
    {
        for (int i = 0; i < TILE_LAYER_COUNT; ++i)
        {
            p_linfo = tile_grid_[y][x].Layer(i);
            if (p_linfo->Visible())
            {
                p_linfo->GetTextureRect(&rect);
                sprite_batch->Draw(
                    p_engine_->GetTexture(p_linfo->texture_id),
                    &rect, NULL, &pos, 0xFFFFFFFF);
            }
        }
        pos.x += p_engine_->TileWidth();
    }
    pos.x = 0;
    pos.y += p_engine_->TileHeight();
}
Your texture indices are wrong. 0,0,32,32 is not the correct value; it should be 0,0,31,31. A zero-based index into a 256-pixel texture atlas yields values 0 to 255, not 0 to 256, and a 32x32 texture should yield 0,0,31,31. In this case, the colour of the incorrect pixels depends on the colour of the next texture along to the right and below.
That's a problem of magnification and minification. Your textures should have an invisible border populated with part of the adjacent texture; then the magnification and minification filters will use that border to calculate the colour of edge pixels rather than the default (white) colour. At least, I think so.
I also had a similar problem with texture mapping. What worked for me was changing the texture address mode in the sampler state description; the texture address mode controls what Direct3D does with texture coordinates outside the [0.0f, 1.0f] range. I changed the ADDRESS_U, ADDRESS_V, and ADDRESS_W members to D3D11_TEXTURE_ADDRESS_CLAMP, which clamps all out-of-range texture coordinates into the [0.0f, 1.0f] range.
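For reference, a minimal sketch of such a sampler state in D3D11 (assuming you already have an ID3D11Device and ID3D11DeviceContext; error handling omitted):
D3D11_SAMPLER_DESC sd = {};
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP; // clamp out-of-range u
sd.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP; // clamp out-of-range v
sd.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
sd.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState* sampler = NULL;
device->CreateSamplerState(&sd, &sampler);
context->PSSetSamplers(0, 1, &sampler); // bind to pixel shader slot 0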
After a long time searching and testing other people's solutions, I found these are the most complete rules I've ever read:
pixel-perfect-2d from the official Unity website
From my own experience I also found that if a sprite's PPI is 72 (for example), you should try a higher PPI for that image (maybe 96 or more). It makes the sprite denser and leaves no room for white gaps to show up.
Welcome to the world of floating point. Those gaps exist due to imperfections in floating-point arithmetic.
You might be able to improve the situation by being really careful when doing your floating-point math, but those seams will be there unless you make one whole mesh out of your terrain.
It's the rasterizer that, given the view and projection matrices as well as the vertex positions, is slightly off. You may be able to improve on that, but I don't know how successful you'll be.
Instead of drawing separate quads, you can index only the visible vertices that make up your terrain and use texture tiling techniques to paint different tiles onto it. I believe that won't give you the ugly seam, because in that case there technically isn't one.
Hey.
I'm going to write a simple ship-shooting game as CG homework, so I'm planning to use some map system (though there's no real need for it; it'll be an "extra"), and I have no clue how to represent a map and show it "in parts"; I mean, not all of the map will be visible in a single frame.
How do people usually work with that?
Consider using a QuadTree. It breaks your map down into small components by area and lets you define how much space you want to see, which makes it ideal for zooming in and out, or panning around.
There's a C# implementation you could probably adapt to C++ fairly easily.
You could also use tiles with a fixed-size 2D array of pointers to tiles; the pointers let you reuse the same tile in multiple places, as sketched below.
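A sketch of that layout (all names are placeholders of mine):
struct Tile { /* pixel data, collision flags, ... */ };
Tile tileSet[64];                  // each unique tile stored once
Tile* map[MAP_HEIGHT][MAP_WIDTH];  // cells point into tileSet, so tiles are reused
// e.g. map[y][x] = &tileSet[levelData[y][x]];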
You might want to do it the way early video game hardware did: store a 2D array of bytes, where each byte is the index of an 8x8-pixel "tile". Elsewhere, store an array of 8x8-pixel tiles. Also store the offset in pixels from the upper-left corner of the map to the upper-left corner of the screen.
When it is time to draw the map, render only the tiles that should be visible on the screen, according to the offset in pixels from the map corner to the screen corner:
int tile_x = pixel_x / tile_width;    // leftmost visible tile column
int tile_y = pixel_y / tile_height;   // topmost visible tile row
int offset_x = pixel_x % tile_width;  // sub-tile scroll offset
int offset_y = pixel_y % tile_height;

for( int y=tile_y; y<=tile_y+screen_height_in_tiles; ++y )
{
    for( int x=tile_x; x<=tile_x+screen_width_in_tiles; ++x )
    {
        // Screen position: relative to the first visible tile,
        // shifted left/up by the sub-tile offset.
        int screen_x = ( x - tile_x ) * tile_width - offset_x;
        int screen_y = ( y - tile_y ) * tile_height - offset_y;
        render_tile( screen_x, screen_y, map[y][x] );
    }
}
This code is not optimally fast, and some logic is missing, such as how to deal with a map that is partially scrolled off the screen. But it's a start.
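For the missing logic, one way to handle a map partially scrolled off the screen is to clamp the loop bounds to the map size before rendering. A sketch, assuming map_width_in_tiles and map_height_in_tiles are the map dimensions (names of mine):
int y_end = tile_y + screen_height_in_tiles;
if( y_end >= map_height_in_tiles ) y_end = map_height_in_tiles - 1;
int x_end = tile_x + screen_width_in_tiles;
if( x_end >= map_width_in_tiles ) x_end = map_width_in_tiles - 1;
// ...then loop y from tile_y to y_end and x from tile_x to x_end,
// so map[y][x] is never read out of bounds.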