I'm generating a terrain from a .bmp file, as a very early precursor for a strategy game. In my code I load the BMP file as an OpenGL texture, then use a double loop to generate coordinates (x, y, redChannel). Then I create indices by double looping again and generating the two triangles for the square between (x, y) and (x+1, y+1). However, when I run the code, I end up with an extra triangle going from the end of one line to the beginning of the next line, which I cannot seem to get rid of. This only happens when I use varied heights and a sufficiently large map, or at least it is not visible otherwise.
This is the code:
void Map::setupVertices(GLsizei* &sizeP, GLint * &vertexArray, GLubyte* &colorArray){
    //textureNum is the identifier generated by glGenTextures
    GLuint textureNum = loadMap("heightmap.bmp");
    //Bind the texture again, and extract the needed data
    glBindTexture(GL_TEXTURE_2D, textureNum);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    GLint i = height*width;
    GLubyte * imageData = new GLubyte[i+1];
    glGetTexImage(GL_TEXTURE_2D,0,GL_RED, GL_UNSIGNED_BYTE, &imageData[0]);
    //Set up variables: counter (used for counting vertices)
    //VertexArray: pointer to address for storing the vertices. Size: 3 ints per point, width*height points total
    //ColorArray: pointer to address for storing the color data. 3 bytes per point.
    int counter = 0;
    vertexArray = new GLint[height*width*3];
    colorArray = new GLubyte[height*width*3];
    srand(time(NULL));
    //Loop through rows
    for (int y = 0; y < height; y++){
        //Loop along the line
        for (int x=0; x < width; x++){
            //Add vertices: x, y, redChannel
            //Add color data: the common color.
            colorArray[counter] = imageData[x+y*width];
            vertexArray[counter++] = x;
            colorArray[counter] = imageData[x+y*width];
            vertexArray[counter++] = y;
            colorArray[counter] = imageData[x+y*width];//(float) (rand() % 255);
            vertexArray[counter++] = (float)imageData[x+y*width] /255 * maxHeight;
        }
    }
    //"Return" the total vertex count
    sizeP = new GLsizei(counter);
}
void Map::setupIndices(GLsizei* &sizeP, GLuint* &indexArray){
    //Pointer to location for storing indices. Size: 2 triangles per square, 3 points per triangle, width*height triangles
    indexArray = new GLuint[width*height*2*3];
    int counter = 0;
    //Loop through rows, don't go to top row (because those triangles are to the row below)
    for (int y = 0; y < height-1; y++){
        //Loop along the line, don't go to last point (those are connected to second last point)
        for (int x=0; x < width-1; x++){
            //
            // TL___TR
            // |  /  |
            // LL___LR
            int lowerLeft = x + width*y;
            int lowerRight = lowerLeft+1;
            int topLeft = lowerLeft + width+1;
            int topRight = topLeft + 1;
            indexArray[counter++] = lowerLeft;
            indexArray[counter++] = lowerRight;
            indexArray[counter++] = topLeft;
            indexArray[counter++] = topLeft;
            indexArray[counter++] = lowerRight;
            indexArray[counter++] = topRight;
        }
    }
    //"Return" the amount of indices
    sizeP = new GLsizei(counter);
}
I eventually draw this with this code:
void drawGL(){
    glPushMatrix();
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3,GL_INT,0,mapHeight);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3,GL_UNSIGNED_BYTE,0,mapcolor);
    if (totalIndices != 0x00000000){
        glDrawElements(GL_TRIANGLES, *totalIndices, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glPopMatrix();
    }
}
Here's a picture of the result:
http://s22.postimg.org/k2qoru3kx/open_GLtriangles.gif
And here with only blue lines on a black background:
http://s21.postimg.org/5yw8sz5mv/triangle_Error_Blue_Line.gif
There also appears to be one of these going in the other direction at the very right edge, but for now I'm assuming it is related to the same issue.
I'd simplify this part:
int lowerLeft = x + width * y;
int lowerRight = (x + 1) + width * y;
int topLeft = x + width * (y + 1);
int topRight = (x + 1) + width * (y + 1);
The problem looks like topLeft has an extra + 1 when it should only have the + width.
This causes the "top" vertices to both be shifted along by one column. You might not notice the offsets within the grid and, as you pointed out, they're not visible until the height changes.
Also, returning new GLsizei(counter) seems a bit roundabout. Why not just pass in GLsizei& counter?
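For illustration, a minimal sketch combining both suggestions (a hypothetical free function, not the author's exact method; it takes width and height as parameters so it is self-contained, and uses the GLuint/GLsizei types from the OpenGL headers already in use):

// Sketch: build the grid indices with the corrected topLeft and report the
// count through a reference parameter instead of a heap-allocated GLsizei.
void buildGridIndices(int width, int height, GLuint*& indexArray, GLsizei& size){
    indexArray = new GLuint[(width - 1) * (height - 1) * 2 * 3];
    GLsizei counter = 0;
    for (int y = 0; y < height - 1; y++){
        for (int x = 0; x < width - 1; x++){
            GLuint lowerLeft  = x + width * y;
            GLuint lowerRight = lowerLeft + 1;
            GLuint topLeft    = lowerLeft + width;   // corrected: no extra +1
            GLuint topRight   = topLeft + 1;
            indexArray[counter++] = lowerLeft;
            indexArray[counter++] = lowerRight;
            indexArray[counter++] = topLeft;
            indexArray[counter++] = topLeft;
            indexArray[counter++] = lowerRight;
            indexArray[counter++] = topRight;
        }
    }
    size = counter;   // "returned" by reference, no new GLsizei needed
}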
These might be worth a look too. You can save a fair bit of data using strip primitives for many procedural objects:
Generate a plane with triangle strips
triangle-strip-for-grids-a-construction
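As a rough illustration of the strip approach from those links (a sketch only, not the author's code; it assumes the same row-major width x height vertex layout used above), one long triangle strip can cover the whole grid, with rows stitched together by repeated indices that form degenerate, zero-area triangles:

#include <vector>

// Sketch: one GL_TRIANGLE_STRIP covering the entire grid. GLuint comes from
// the OpenGL headers already in use. The repeated indices between rows create
// degenerate triangles that the rasterizer skips.
std::vector<GLuint> buildGridStrip(int width, int height){
    std::vector<GLuint> strip;
    for (int y = 0; y < height - 1; ++y){
        if (y > 0)
            strip.push_back(y * width);              // repeat first index of the new row
        for (int x = 0; x < width; ++x){
            strip.push_back(y * width + x);          // vertex on the current row
            strip.push_back((y + 1) * width + x);    // vertex on the next row
        }
        if (y < height - 2)
            strip.push_back((y + 2) * width - 1);    // repeat last index of this row
    }
    return strip;
}

It would then be drawn with glDrawElements(GL_TRIANGLE_STRIP, strip.size(), GL_UNSIGNED_INT, strip.data()) instead of GL_TRIANGLES.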
Related
I'm working on a small "game"-like project as practice, and I've managed to get my framerate down to not even 3 FPS, even though the only things being drawn are screen-filling tiles and a minimap.
I've found that the problem is with the minimap; without it, the framerate caps at 60 FPS. But unfortunately I'm not experienced enough to figure out what the real problem with it is...
My draw function:
void StateIngame::draw()
{
    m_gui.removeAllWidgets();
    m_window.setView(m_view);
    // Frame counter
    float ct = m_clock.restart().asSeconds();
    float fps = 1.f / ct;
    m_time = ct;
    char c[10];
    sprintf(c, "%f", fps);
    std::string fpsStr(c);
    sf::String str(fpsStr);
    auto fpsText = tgui::Label::create();
    fpsText->setText(str);
    fpsText->setTextSize(16);
    fpsText->setPosition(15, 15);
    m_gui.add(fpsText);
    //////////////
    // Draw map //
    //////////////
    int camOffsetX, camOffsetY;
    int tileSize = m_map.getTileSize();
    Tile tile;
    sf::IntRect bounds = m_camera.getTileBounds(tileSize);
    camOffsetX = m_camera.getTileOffset(tileSize).x;
    camOffsetY = m_camera.getTileOffset(tileSize).y;
    // Loop and draw each tile
    // x and y = counters; tileX and tileY are the coordinates of the tile being drawn
    for (int y = 0, tileY = bounds.top; y < bounds.height; y++, tileY++)
    {
        for (int x = 0, tileX = bounds.left; x < bounds.width; x++, tileX++)
        {
            try {
                // Normal view
                m_window.setView(m_view);
                tile = m_map.getTile(tileX, tileY);
                tile.render((x * tileSize) - camOffsetX, (y * tileSize) - camOffsetY, &m_window);
            } catch (const std::out_of_range& oor)
            {}
        }
    }
    bounds = sf::IntRect(bounds.left - (bounds.width * 2), bounds.top - (bounds.height * 2), bounds.width * 4, bounds.height * 4);
    for (int y = 0, tileY = bounds.top; y < bounds.height; y++, tileY++)
    {
        for (int x = 0, tileX = bounds.left; x < bounds.width; x++, tileX++)
        {
            try {
                // Mini map
                m_window.setView(m_minimap);
                tile = m_map.getTile(tileX, tileY);
                sf::RectangleShape miniTile(sf::Vector2f(4, 4));
                miniTile.setFillColor(tile.m_color);
                miniTile.setPosition((x * (tileSize / 4)), (y * (tileSize / 4)));
                m_window.draw(miniTile);
            } catch (const std::out_of_range& oor)
            {}
        }
    }
    // Gui
    m_window.setView(m_view);
    m_gui.draw();
}
The Tile class has a variable of type sf::Color which is set during map generation. This color is then used to draw the minimap instead of the 16x16 texture that is used for the map itself.
So when I leave out the minimap drawing, and only draw the map itself, it's more fluid than I could wish for...
Any help is appreciated!
You are generating the view completely anew for every frame. Doing this once at startup should be enough.
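For example, a minimal sketch of that idea (the members m_fpsText, m_minimapTexture, minimapWidth and minimapHeight are hypothetical; everything else uses the names from the question):

// Sketch: create the expensive objects once at startup and reuse them each frame.

// In the constructor / init:
m_fpsText = tgui::Label::create();            // tgui::Label::Ptr member (hypothetical)
m_fpsText->setTextSize(16);
m_fpsText->setPosition(15, 15);
m_gui.add(m_fpsText);

m_minimapTexture.create(minimapWidth, minimapHeight);   // sf::RenderTexture member (hypothetical)
m_minimapTexture.clear();
// ... draw every tile's 4x4 colored rectangle into m_minimapTexture once here ...
m_minimapTexture.display();

// In draw(): only update what actually changes.
m_fpsText->setText(std::to_string(1.f / m_clock.restart().asSeconds()));
m_window.setView(m_minimap);
m_window.draw(sf::Sprite(m_minimapTexture.getTexture()));
m_window.setView(m_view);
m_gui.draw();

That way neither the GUI widgets nor the thousands of minimap rectangles are rebuilt every frame.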
I have a pixel buffer object of size (200 * 200 * 3), where each pixel has three consecutive bytes for the RGB colors. How can I index it if I am trying to implement the DDA line drawing algorithm? I have seen a lot of examples on the web that use PutPixel(x, y), but I'm not sure how to access the pixels with this layout.
The pixels are arranged row by row, with each pixel using 3 bytes. To address a point (x, y), you multiply the y value by the size of a row (the width multiplied by 3), multiply the x value by the size of a pixel (3 bytes), and add the two together.
With a few constants for readability, the code for the function could look like this:
const int IMG_WIDTH = 200;
const int IMG_HEIGHT = 200;
const int BYTES_PER_PIXEL = 3;
const int BYTES_PER_ROW = IMG_WIDTH * BYTES_PER_PIXEL;
void PutPixel(uint8_t* pImgData, int x, int y, const uint8_t color[3])
{
    uint8_t* pPixel = pImgData + y * BYTES_PER_ROW + x * BYTES_PER_PIXEL;
    for (int iByte = 0; iByte < BYTES_PER_PIXEL; ++iByte)
    {
        pPixel[iByte] = color[iByte];
    }
}
Example of how this function could be used:
// Allocate image data (3 bytes per pixel).
uint8_t* pImgData = new uint8_t[IMG_HEIGHT * BYTES_PER_ROW];
// Initialize image data, unless you are planning to set all pixels.
...
// Set pixel (50, 30) to yellow.
uint8_t yellow[3] = {255, 255, 0};
PutPixel(pImgData, 50, 30, yellow);
Once you have your image built in memory, you can store the content in a pixel buffer object using glBufferData():
GLuint bufId = 0;
glGenBuffers(1, &bufId);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, bufId);
glBufferData(GL_PIXEL_UNPACK_BUFFER, IMG_HEIGHT * BYTES_PER_ROW,
             pImgData, GL_STATIC_DRAW);
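Since the question mentions DDA, here is a minimal line-drawing sketch built on the PutPixel function above (an illustration only; it assumes both endpoints lie inside the image):

#include <algorithm>
#include <cmath>
#include <cstdint>

// Sketch: DDA line from (x0, y0) to (x1, y1), writing into the buffer via PutPixel.
void DrawLineDDA(uint8_t* pImgData, int x0, int y0, int x1, int y1,
                 const uint8_t color[3])
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int steps = std::max(std::abs(dx), std::abs(dy));   // one step per pixel along the longer axis
    if (steps == 0) { PutPixel(pImgData, x0, y0, color); return; }
    float xInc = dx / (float)steps;
    float yInc = dy / (float)steps;
    float x = (float)x0;
    float y = (float)y0;
    for (int i = 0; i <= steps; ++i)
    {
        PutPixel(pImgData, (int)std::lround(x), (int)std::lround(y), color);
        x += xInc;
        y += yInc;
    }
}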
I want to repeat a small 2x2 pixel texture on a bigger quad, for instance 50x50 pixels.
Set vertices -
float X = 100, Y = 100, Width = 50, Height = 50;
float TextureLeft = 0, TextureTop = 0, TextureRight = 25, TextureBottom = 25;
Vertices[0].x = X;
Vertices[0].y = Y + Height;
Vertices[0].z = 0;
Vertices[0].rhw = 1;
Vertices[0].tu = TextureLeft;
Vertices[0].tv = TextureBottom;
Vertices[1].x = X;
Vertices[1].y = Y;
Vertices[1].z = 0;
Vertices[1].rhw = 1;
Vertices[1].tu = TextureLeft;
Vertices[1].tv = TextureTop;
Vertices[2].x = X + Width;
Vertices[2].y = Y;
Vertices[2].z = 0;
Vertices[2].rhw = 1;
Vertices[2].tu = TextureRight;
Vertices[2].tv = TextureTop;
Vertices[3].x = X;
Vertices[3].y = Y + Height;
Vertices[3].z = 0;
Vertices[3].rhw = 1;
Vertices[3].tu = TextureLeft;
Vertices[3].tv = TextureBottom;
Vertices[4].x = X + Width;
Vertices[4].y = Y;
Vertices[4].z = 0;
Vertices[4].rhw = 1;
Vertices[4].tu = TextureRight;
Vertices[4].tv = TextureTop;
Vertices[5].x = X + Width;
Vertices[5].y = Y + Height;
Vertices[5].z = 0;
Vertices[5].rhw = 1;
Vertices[5].tu = TextureRight;
Vertices[5].tv = TextureBottom;
Draw -
DrawPrimitive(D3DPT_TRIANGLELIST, 0, 6);
Problem is "glitch" in the edge between the triangles, probably because of wrong vertices coordinates and also "glitch" on quad borders.
Original texture - http://i.imgur.com/tNqYePs.png
Result - http://i.imgur.com/sgUZvqE.png
Before the call to DrawPrimitive you should set up the texture wrapping as in this article.
// For the textures other than the first one use "D3DVERTEXTEXTURESAMPLER0+index"
YourDevice->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
YourDevice->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);
To eliminate the glitch at the diagonal you may use a single quad instead of two triangles.
The problem at the edges is considered here. You have to add a small offset to each texture coordinate. "Small" means half a pixel in normalized coordinates: if your texture resolution is 512x512, add (0.5/512.0) to each of the u/v coordinates.
If you draw in 2D, you must add 0.5px to the U and V coordinates when texturing. This will give you exact pixel/texel precision. Otherwise you will lose 0.5 pixels every time and the texture will look blurry.
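For reference, the same half-pixel correction is often applied to the screen-space positions instead of the UVs when using pre-transformed (rhw) vertices in D3D9. A sketch, assuming the Vertices array from the question:

// Sketch: D3D9 half-pixel correction for pre-transformed (rhw) 2D vertices.
// Shifting the positions by -0.5 aligns pixel centers with texel centers,
// which is the same correction described above in texture-coordinate space.
for (int i = 0; i < 6; ++i)
{
    Vertices[i].x -= 0.5f;
    Vertices[i].y -= 0.5f;
}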
I'm working on just making uniformly colored spheres for a project and I'm running into an issue. The spheres render fine, but when I try to color them with glColorPointer they stop appearing. OpenGL isn't showing any errors when I call glGetError, so I'm at a loss as to why this would happen.
The code to generate the vertices, colors etc:
void SphereObject::setupVertices()
{
    //determine the array sizes
    //vertices per row (+1 for the repeated one at the end) * three for each coordinate
    //times the number of rows
    int arraySize = myNumVertices * 3;
    myNumIndices = (myVerticesPerRow + 1) * myRows * 2;
    myVertices = new GLdouble[arraySize];
    myIndices = new GLuint[myNumIndices];
    myNormals = new GLdouble[arraySize];
    myColors = new GLint[myNumVertices * 4];
    //use spherical coordinates to calculate the vertices
    double phiIncrement = 360 / myVerticesPerRow;
    double thetaIncrement = 180 / (double)myRows;
    int arrayIndex = 0;
    int colorArrayIndex = 0;
    int indicesIndex = 0;
    double x, y, z = 0;
    for(double theta = 0; theta <= 180; theta += thetaIncrement)
    {
        //loop including the repeat for the last vertex
        for(double phi = 0; phi <= 360; phi += phiIncrement)
        {
            //make sure that the last vertex is repeated
            if(360 - phi < phiIncrement)
            {
                x = myRadius * sin(radians(theta)) * cos(radians(0));
                y = myRadius * sin(radians(theta)) * sin(radians(0));
                z = myRadius * cos(radians(theta));
            }
            else
            {
                x = myRadius * sin(radians(theta)) * cos(radians(phi));
                y = myRadius * sin(radians(theta)) * sin(radians(phi));
                z = myRadius * cos(radians(theta));
            }
            myColors[colorArrayIndex] = myColor.getX();
            myColors[colorArrayIndex + 1] = myColor.getY();
            myColors[colorArrayIndex + 2] = myColor.getZ();
            myColors[colorArrayIndex + 3] = 1;
            myVertices[arrayIndex] = x;
            myVertices[arrayIndex + 1] = y;
            myVertices[arrayIndex + 2] = z;
            if(theta <= 180 - thetaIncrement)
            {
                myIndices[indicesIndex] = arrayIndex / 3;
                myIndices[indicesIndex + 1] = (arrayIndex / 3) + myVerticesPerRow + 1;
                indicesIndex += 2;
            }
            arrayIndex += 3;
            colorArrayIndex += 4;
        }
    }
}
And the code to actually render the thing
void SphereObject::render()
{
    glPushMatrix();
    glPushClientAttrib(GL_CLIENT_VERTEX_ARRAY_BIT);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_INT, 0, myColors);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_DOUBLE, 0, myVertices);
    glDrawElements(GL_QUAD_STRIP, myNumIndices, GL_UNSIGNED_INT, myIndices);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glPopClientAttrib();
    glPopMatrix();
}
Any and all help would be appreciated. I'm really having a hard time for some reason.
When you use GL_INT (or any integer type) for the color pointer, OpenGL linearly maps the largest possible integer value to 1.0f (maximum color) and 0 to 0.0f (minimum color).
Therefore, unless your RGB and A values are in the billions, the colors will appear essentially black (or transparent, if blending is enabled). I see that you've got alpha = 1, which becomes essentially zero after conversion to float.
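One possible fix is to store the colors as unsigned bytes instead. A sketch only (it assumes the myColors member is changed to GLubyte* and that the myColor components are already in the 0-255 range; scale them first if they are 0.0-1.0 floats):

// Sketch: store the colors as unsigned bytes, where 255 maps to full intensity.
GLubyte* myColors = new GLubyte[myNumVertices * 4];

// inside the same vertex loop as before:
myColors[colorArrayIndex]     = (GLubyte)myColor.getX();
myColors[colorArrayIndex + 1] = (GLubyte)myColor.getY();
myColors[colorArrayIndex + 2] = (GLubyte)myColor.getZ();
myColors[colorArrayIndex + 3] = 255;   // fully opaque

// and in render():
glColorPointer(4, GL_UNSIGNED_BYTE, 0, myColors);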
I am trying to load a HeightmapTerrainShape in OgreBullet by (mostly) using the demo code, but my terrain mesh is offset from the HeightmapTerrainShape. I have no clue why this is happening. This is my code:
void TerrainLoader::setTerrainPhysics(Ogre::Image *imgPtr)
{
    unsigned page_size = terrainGroup->getTerrainSize();
    Ogre::Vector3 terrainScale(4096 / (page_size-1), 600, 4096 / (page_size-1));
    float *heights = new float[page_size*page_size];
    for(unsigned y = 0; y < page_size; ++y)
    {
        for(unsigned x = 0; x < page_size; ++x)
        {
            Ogre::ColourValue color = imgPtr->getColourAt(x, y, 0);
            heights[x + y * page_size] = color.r;
        }
    }
    OgreBulletCollisions::HeightmapCollisionShape *terrainShape = new OgreBulletCollisions::HeightmapCollisionShape(
        page_size,
        page_size,
        terrainScale,
        heights,
        true
    );
    OgreBulletDynamics::RigidBody *terrainBody = new OgreBulletDynamics::RigidBody(
        "Terrain",
        OgreInit::level->physicsManager->getWorld()
    );
    imgPtr = NULL;
    Ogre::Vector3 terrainShiftPos(terrainScale.x/(page_size-1), 0, terrainScale.z/(page_size-1));
    terrainShiftPos.y = terrainScale.y / 2 * terrainScale.y;
    Ogre::SceneNode *pTerrainNode = OgreInit::sceneManager->getRootSceneNode()->createChildSceneNode();
    terrainBody->setStaticShape(pTerrainNode, terrainShape, 0.0f, 0.8f, terrainShiftPos);
    //terrainBody->setPosition(terrainBody->getWorldPosition()-Ogre::Vector3(0.005, 0, 0.005));
    OgreInit::level->physicsManager->addBody(terrainBody);
    OgreInit::level->physicsManager->addShape(terrainShape);
}
This is what it looks like with the debug drawer turned on:
My world is 4096*600*4096 in size, and each chunk is 64*600*64
heights[x + y * page_size] = color.r;
This line gives you negative values. If you combine negative terrain height values with OgreBullet terrain, you get a wrong bounding box conversion.
You need to use the interval 0-1 for the height values.
I had the same problem with a Perlin noise filter that produces values from -1 to 1.
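A sketch of that remapping (an illustration only; it assumes the heights array from the question and source values in [-1, 1]):

// Sketch: remap height samples from [-1, 1] into [0, 1] before creating the
// HeightmapCollisionShape, so the bounding box is computed correctly.
for (unsigned i = 0; i < page_size * page_size; ++i)
{
    float h = heights[i];               // e.g. Perlin noise output in [-1, 1]
    heights[i] = (h + 1.0f) * 0.5f;     // now in [0, 1]
}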