Texture repeating - C++

I want to repeat a small 2x2 pixel texture on a bigger quad, for instance 50x50 pixels.
Set vertices -
float X = 100, Y = 100, Width = 50, Height = 50;
float TextureLeft = 0, TextureTop = 0, TextureRight = 25, TextureBottom = 25;
Vertices[0].x = X;
Vertices[0].y = Y + Height;
Vertices[0].z = 0;
Vertices[0].rhw = 1;
Vertices[0].tu = TextureLeft;
Vertices[0].tv = TextureBottom;
Vertices[1].x = X;
Vertices[1].y = Y;
Vertices[1].z = 0;
Vertices[1].rhw = 1;
Vertices[1].tu = TextureLeft;
Vertices[1].tv = TextureTop;
Vertices[2].x = X + Width;
Vertices[2].y = Y;
Vertices[2].z = 0;
Vertices[2].rhw = 1;
Vertices[2].tu = TextureRight;
Vertices[2].tv = TextureTop;
Vertices[3].x = X;
Vertices[3].y = Y + Height;
Vertices[3].z = 0;
Vertices[3].rhw = 1;
Vertices[3].tu = TextureLeft;
Vertices[3].tv = TextureBottom;
Vertices[4].x = X + Width;
Vertices[4].y = Y;
Vertices[4].z = 0;
Vertices[4].rhw = 1;
Vertices[4].tu = TextureRight;
Vertices[4].tv = TextureTop;
Vertices[5].x = X + Width;
Vertices[5].y = Y + Height;
Vertices[5].z = 0;
Vertices[5].rhw = 1;
Vertices[5].tu = TextureRight;
Vertices[5].tv = TextureBottom;
Draw -
DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2); // primitive count: 2 triangles, not the vertex count
The problem is a "glitch" at the edge between the triangles, probably because of wrong vertex coordinates, and also a "glitch" on the quad borders.
Original texture - http://i.imgur.com/tNqYePs.png
Result - http://i.imgur.com/sgUZvqE.png

Before the call to DrawPrimitive you should set up the texture wrapping as in this article.
// Sampler 0 is the first texture stage; for the other textures use the corresponding sampler index (1, 2, ...)
YourDevice->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
YourDevice->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);
To eliminate the glitch at the diagonal you may draw a single quad, with both triangles sharing the same four vertices, instead of two independent sets of triangle vertices; a sketch follows.
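A minimal sketch of that idea, assuming a CustomVertex struct matching the question's vertex layout (x, y, z, rhw, tu, tv with an FVF of D3DFVF_XYZRHW | D3DFVF_TEX1; the struct name and the DrawPrimitiveUP call are illustrative, not the asker's code):
struct CustomVertex { float x, y, z, rhw, tu, tv; };
// Four shared vertices drawn as a two-triangle strip, so both triangles
// reference exactly the same diagonal vertices.
CustomVertex Quad[4] = {
    { X,         Y,          0, 1, TextureLeft,  TextureTop    }, // top-left
    { X + Width, Y,          0, 1, TextureRight, TextureTop    }, // top-right
    { X,         Y + Height, 0, 1, TextureLeft,  TextureBottom }, // bottom-left
    { X + Width, Y + Height, 0, 1, TextureRight, TextureBottom }, // bottom-right
};
YourDevice->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, Quad, sizeof(CustomVertex));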
The problem at the edges is considered here. You have to add a small offset to each texture coordinate: half a pixel, normalized. If your texture resolution is 512x512, add (0.5/512.0) to each of the u/v coordinates.
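A sketch of that offset applied to the question's vertex array (the 512x512 size is only an example; use your texture's real dimensions):
const float TexWidth = 512.0f, TexHeight = 512.0f; // your texture's size
const float HalfTexelU = 0.5f / TexWidth;
const float HalfTexelV = 0.5f / TexHeight;
for (int i = 0; i < 6; ++i)
{
    Vertices[i].tu += HalfTexelU; // shift each coordinate by half a texel
    Vertices[i].tv += HalfTexelV;
}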

If you draw in 2D, you must add 0.5px to the U and V coordinates when texturing. This gives you exact pixel/texel alignment. Otherwise you lose half a pixel every time and the texture looks blurry.

Related

How to downsample a not-power-of-2 texture in UnrealEngine?

I am rendering the viewport with a resolution of something like 1920x1080, multiplied by an oversampling value like 4. Now I need to downsample from the rendered resolution of 7680x4320 back to 1920x1080.
Are there any functions in Unreal I could use for that? Or any library (Windows only) which handles this nicely?
Or what would be a proper way of writing this myself?
We tried to implement downsampling, but it only works if SnapshotScale is 2; when it's higher than 2 it doesn't seem to have any effect on image quality.
UTexture2D* AAVESnapShotManager::DownsampleTexture(UTexture2D* Texture)
{
    UTexture2D* Result = UTexture2D::CreateTransient(RenderSettings.imageWidth, RenderSettings.imageHeight, PF_B8G8R8A8);

    void* TextureDataVoid = Texture->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_ONLY);
    void* ResultDataVoid = Result->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
    FColor* TextureData = (FColor*)TextureDataVoid;
    FColor* ResultData = (FColor*)ResultDataVoid;
    int32 WindowSize = RenderSettings.resolutionScale / 2;

    for (int x = 0; x < Result->GetSizeX(); ++x)
    {
        for (int y = 0; y < Result->GetSizeY(); ++y)
        {
            const uint32 ResultIndex = y * Result->GetSizeX() + x;
            uint32_t R = 0, G = 0, B = 0, A = 0;
            int32 Samples = 0;
            for (int32 dx = -WindowSize; dx < WindowSize; ++dx)
            {
                for (int32 dy = -WindowSize; dy < WindowSize; ++dy)
                {
                    int32 PosX = (x * RenderSettings.resolutionScale + dx);
                    int32 PosY = (y * RenderSettings.resolutionScale + dy);
                    if (PosX < 0 || PosX >= Texture->GetSizeX() || PosY < 0 || PosY >= Texture->GetSizeY())
                    {
                        continue;
                    }
                    size_t TextureIndex = PosY * Texture->GetSizeX() + PosX;
                    FColor& Color = TextureData[TextureIndex];
                    R += Color.R;
                    G += Color.G;
                    B += Color.B;
                    A += Color.A;
                    ++Samples;
                }
            }
            ResultData[ResultIndex] = FColor(R / Samples, G / Samples, B / Samples, A / Samples);
        }
    }

    Texture->PlatformData->Mips[0].BulkData.Unlock();
    Result->PlatformData->Mips[0].BulkData.Unlock();
    Result->UpdateResource();
    return Result;
}
I expect a high-quality oversampled texture output, working with any positive integer value of SnapshotScale.
I have a suggestion. It's not really direct, but it involves no writing of image filtering code and no importing of libraries.
Make an unlit material with nodes TextureObject -> TextureSample -> connect to Emissive.
Use the texture you start with in your function to populate the TextureObject parameter on a Material Instance Dynamic of that material.
Use the "Draw Material to Render Target" function to draw the Material Instance Dynamic to a render target that is pre-set to your target resolution.

Texture mapping so close but not quite right

I am building a raytracer and my texture mapping isn't quite right. It's very close, though. I built a cup in Blender and did a UV unwrap to display a texture. I exported the object and loaded it into my raytracer with the same texture. Here are two pictures:
As you can see, the textures look very close, but something is off. If you look at the bottom of the cup on the sides, you can see they aren't the same, but the textures are all aligned correctly, so it does look somewhat right. The texture coordinates are calculated using barycentric coordinates:
Vect n = getTriangleNormal();
// Each edge/point vector below is negated; the negations cancel inside the
// cross products, so the area ratios come out the same as with (B - A), (C - A), etc.
Vect ba = B.add(A.negative()).negative();
Vect ca = C.add(A.negative()).negative();
Vect ap = A.add(point.negative()).negative();
Vect bp = B.add(point.negative()).negative();
Vect cp = C.add(point.negative()).negative();
double areaABC = n.dotProduct(ba.crossProduct(ca));
double areaPBC = n.dotProduct(bp.crossProduct(cp));
double areaPCA = n.dotProduct(cp.crossProduct(ap));
if (areaABC < 0) { areaABC = -areaABC; }
if (areaPBC < 0) { areaPBC = -areaPBC; }
if (areaPCA < 0) { areaPCA = -areaPCA; }
double u = areaPBC / areaABC; // alpha: weight of vertex A (area opposite A)
double v = areaPCA / areaABC; // beta: weight of vertex B (area opposite B)
double w = 1.0 - u - v;       // gamma: weight of vertex C
Then to find the color I take the interpolated point and map it onto the image:
Vect uv = (textA.mult(u)).add(textB.mult(v)).add(textC.mult(w));
int width = texture->columns();
int height = texture->rows();
double x = width * uv.getX(); x = (int) x;
double y = height * (1 - uv.getY()); y = (int) y;
//vector<unsigned int> c = texture->getPixel(x,y);
//return Color(c[0]/255.0, c[1]/255.0, c[2]/255.0, 0);
int row = y;
int column = x;
Magick::PixelPacket *pixels = texture->getPixels(0, 0, width, height);
Magick::Color color = pixels[width * row + column];
double range = pow(2, texture->modulusDepth());
double r = color.redQuantum() / range;
double g = color.greenQuantum() / range;
double b = color.blueQuantum() / range;
return Color(r, g, b, 0);
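For comparison, here is a minimal standalone version of the same barycentric computation using signed areas (Vec3 and its helpers are hypothetical stand-ins for the Vect class above; a sketch, not a drop-in replacement). Because the areas keep their sign, a weight paired with the wrong vertex shows up immediately instead of being masked by the abs() steps:
struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Weights (alpha, beta, gamma) of point P with respect to triangle ABC.
// For P inside the triangle each weight lies in [0, 1] and they sum to 1.
void barycentric(Vec3 A, Vec3 B, Vec3 C, Vec3 P,
                 double& alpha, double& beta, double& gamma)
{
    Vec3 n = cross(sub(B, A), sub(C, A));            // unnormalized triangle normal
    double d = dot(n, n);                            // squared length of n
    alpha = dot(n, cross(sub(C, B), sub(P, B))) / d; // signed area opposite A
    beta  = dot(n, cross(sub(A, C), sub(P, C))) / d; // signed area opposite B
    gamma = 1.0 - alpha - beta;
}
The interpolated UV is then alpha * uvA + beta * uvB + gamma * uvC, matching the textA/textB/textC combination above.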

OpenGL drawElements - one extra triangle, using index array?

I'm generating a terrain from a .bmp file, as a very early precursor for a strategy game. In my code I load the BMP file as an OpenGL texture, then use a double loop to generate coordinates (x, y, redChannel). Then I create indices by again double looping and generating the triangles for a square between (x, y) and (x+1, y+1). However, when I run the code, I end up with an extra triangle going from the end of one line to the beginning of the next line, which I cannot seem to solve. This only happens when I use varied heights and a sufficiently large map, or at least it is not visible otherwise.
This is the code:
void Map::setupVertices(GLsizei* &sizeP, GLint* &vertexArray, GLubyte* &colorArray){
    //textureNum is the identifier generated by glGenTextures
    GLuint textureNum = loadMap("heightmap.bmp");
    //Bind the texture again, and extract the needed data
    glBindTexture(GL_TEXTURE_2D, textureNum);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    GLint i = height*width;
    GLubyte* imageData = new GLubyte[i+1];
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, &imageData[0]);
    //Setup variables: counter (used for counting vertices)
    //vertexArray: pointer to address for storing the vertices. Size: 3 ints per point, width*height points total
    //colorArray: pointer to address for storing the color data. 3 bytes per point.
    int counter = 0;
    vertexArray = new GLint[height*width*3];
    colorArray = new GLubyte[height*width*3];
    srand(time(NULL));
    //Loop through rows
    for (int y = 0; y < height; y++){
        //Loop along the line
        for (int x = 0; x < width; x++){
            //Add vertices: x, y, redChannel
            //Add colordata: the common color.
            colorArray[counter] = imageData[x+y*width];
            vertexArray[counter++] = x;
            colorArray[counter] = imageData[x+y*width];
            vertexArray[counter++] = y;
            colorArray[counter] = imageData[x+y*width];//(float) (rand() % 255);
            vertexArray[counter++] = (float)imageData[x+y*width] / 255 * maxHeight;
        }
    }
    //"Return" the total vertex count
    sizeP = new GLsizei(counter);
}
void Map::setupIndices(GLsizei* &sizeP, GLuint* &indexArray){
    //Pointer to location for storing indices. Size: 2 triangles per square, 3 points per triangle, width*height triangles
    indexArray = new GLuint[width*height*2*3];
    int counter = 0;
    //Loop through rows, don't go to top row (because those triangles are to the row below)
    for (int y = 0; y < height-1; y++){
        //Loop along the line, don't go to last point (those are connected to second last point)
        for (int x = 0; x < width-1; x++){
            //
            // TL___TR
            // |  /  |
            // LL___LR
            int lowerLeft = x + width*y;
            int lowerRight = lowerLeft + 1;
            int topLeft = lowerLeft + width + 1;
            int topRight = topLeft + 1;
            indexArray[counter++] = lowerLeft;
            indexArray[counter++] = lowerRight;
            indexArray[counter++] = topLeft;
            indexArray[counter++] = topLeft;
            indexArray[counter++] = lowerRight;
            indexArray[counter++] = topRight;
        }
    }
    //"Return" the amount of indices
    sizeP = new GLsizei(counter);
}
I eventually draw this with this code:
void drawGL(){
    glPushMatrix();
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_INT, 0, mapHeight);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3, GL_UNSIGNED_BYTE, 0, mapcolor);
    if (totalIndices != 0x00000000){
        glDrawElements(GL_TRIANGLES, *totalIndices, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glPopMatrix();
    }
}
Here's a picture of the result:
http://s22.postimg.org/k2qoru3kx/open_GLtriangles.gif
And with only blue lines and a black background:
http://s21.postimg.org/5yw8sz5mv/triangle_Error_Blue_Line.gif
There also appears to be one of these going in the other direction as well, at the very right edge, but I'm supposing for now that it may be related to the same issue.
I'd simplify this part:
int lowerLeft = x + width * y;
int lowerRight = (x + 1) + width * y;
int topLeft = x + width * (y + 1);
int topRight = (x + 1) + width * (y + 1);
The problem looks like topLeft has an extra + 1 when it should only have the + width.
This causes the "top" vertices to all be shifted along by one column. You might not notice the offset within the grid and, as you pointed out, it isn't visible until the height changes.
Also, returning new GLsizei(counter) seems a bit roundabout. Why not just pass in a GLsizei& counter?
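For instance, a pass-by-reference signature (hypothetical, mirroring the existing parameters) would avoid the heap allocation entirely:
void Map::setupIndices(GLsizei &size, GLuint* &indexArray); // write the count into size directly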
These might be worth a look too; you can save a fair bit of data by using strip primitives for many procedural objects (see the sketch after the links):
Generate a plane with triangle strips
triangle-strip-for-grids-a-construction
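As a rough illustration of what those links describe, here is a sketch (not drop-in code for the Map class above) that emits one strip per row and links the rows with repeated, degenerate indices so the whole grid draws as a single GL_TRIANGLE_STRIP:
#include <vector>
std::vector<GLuint> strip;
for (int y = 0; y < height - 1; y++){
    if (y > 0)
        strip.push_back(y * width); // repeat the first index of this row (degenerate link)
    for (int x = 0; x < width; x++){
        strip.push_back(y * width + x);       // vertex on the current row
        strip.push_back((y + 1) * width + x); // vertex on the next row
    }
    if (y < height - 2)
        strip.push_back((y + 1) * width + (width - 1)); // repeat the last index (degenerate link)
}
// Drawn with: glDrawElements(GL_TRIANGLE_STRIP, strip.size(), GL_UNSIGNED_INT, strip.data());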

OgreBullet incorrect HeightmapCollisionShape shape scale?

I am trying to load a HeightmapTerrainShape in OgreBullet by (mostly) using the demo code, but my terrain mesh is offset from the HeightmapTerrainShape. I have no clue why this is happening. This is my code:
void TerrainLoader::setTerrainPhysics(Ogre::Image *imgPtr)
{
    unsigned page_size = terrainGroup->getTerrainSize();
    Ogre::Vector3 terrainScale(4096 / (page_size-1), 600, 4096 / (page_size-1));
    float *heights = new float[page_size*page_size];
    for(unsigned y = 0; y < page_size; ++y)
    {
        for(unsigned x = 0; x < page_size; ++x)
        {
            Ogre::ColourValue color = imgPtr->getColourAt(x, y, 0);
            heights[x + y * page_size] = color.r;
        }
    }
    OgreBulletCollisions::HeightmapCollisionShape *terrainShape =
        new OgreBulletCollisions::HeightmapCollisionShape(
            page_size,
            page_size,
            terrainScale,
            heights,
            true
        );
    OgreBulletDynamics::RigidBody *terrainBody = new OgreBulletDynamics::RigidBody(
        "Terrain",
        OgreInit::level->physicsManager->getWorld()
    );
    imgPtr = NULL;
    Ogre::Vector3 terrainShiftPos(terrainScale.x/(page_size-1), 0, terrainScale.z/(page_size-1));
    terrainShiftPos.y = terrainScale.y / 2 * terrainScale.y;
    Ogre::SceneNode *pTerrainNode = OgreInit::sceneManager->getRootSceneNode()->createChildSceneNode();
    terrainBody->setStaticShape(pTerrainNode, terrainShape, 0.0f, 0.8f, terrainShiftPos);
    //terrainBody->setPosition(terrainBody->getWorldPosition()-Ogre::Vector3(0.005, 0, 0.005));
    OgreInit::level->physicsManager->addBody(terrainBody);
    OgreInit::level->physicsManager->addShape(terrainShape);
}
This is what it looks like with the debug drawer turned on:
My world is 4096*600*4096 in size, and each chunk is 64*600*64
heights[x + y * page_size] = color.r;
This line can give you negative values. If you combine negative terrain height values with OgreBullet terrain, you get a wrong bounding box conversion.
You need to use the interval 0-1 for height values.
I had the same problem with a Perlin noise filter that gives you values from -1 to 1.
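A minimal sketch of that remap for the noise case (rawHeight is a hypothetical sample in [-1, 1]):
float height01 = (rawHeight + 1.0f) * 0.5f; // maps [-1, 1] into [0, 1]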

A method for indexing triangles from a loaded heightmap?

I am currently writing a method to load in a noisy heightmap, but I lack the triangle indices to do so. I want to make an algorithm that will take an image, its width and its height, and construct a terrain node out of it.
Here's what I have so far, in somewhat pseudocode:
Vertex* vertices = new Vertices[image.width * image.height];
Index* indices; // How do I judge how many indices I will have?
float scaleX = 1 / image.width;
float scaleY = 1 / image.height;
float currentYScale = 0;
for(int y = 0; y < image.height; ++y) {
    float currentXScale = 0;
    for (int x = 0; x < image.width; ++x) {
        Vertex* v = vertices[x * y];
        v.x = currentXScale;
        v.y = currentYScale;
        v.z = image[x,y];
        currentXScale += scaleX;
    }
    currentYScale += scaleY;
}
This works well enough for my needs. My only problem is this: how would I calculate the number of indices and their positions for drawing the triangles? I have some familiarity with indices, but not with how to calculate them programmatically; I can only do that statically.
As far as your code above goes, using vertices[x * y] isn't right - if you use that, then e.g. vert(2,3) == vert(3,2). What you want is something like vertices[y * image.width + x], but you can do it more efficiently by incrementing a counter (see below).
Here's the equivalent code I use. It's in C# unfortunately, but hopefully it should illustrate the point:
/// <summary>
/// Constructs the vertex and index buffers for the terrain (for use when rendering the terrain).
/// </summary>
private void ConstructBuffers()
{
    int heightmapHeight = Heightmap.GetLength(0);
    int heightmapWidth = Heightmap.GetLength(1);
    int gridHeight = heightmapHeight - 1;
    int gridWidth = heightmapWidth - 1;

    // Construct the individual vertices for the terrain.
    var vertices = new VertexPositionTexture[heightmapHeight * heightmapWidth];
    int vertIndex = 0;
    for(int y = 0; y < heightmapHeight; ++y)
    {
        for(int x = 0; x < heightmapWidth; ++x)
        {
            var position = new Vector3(x, y, Heightmap[y,x]);
            var texCoords = new Vector2(x * 2f / heightmapWidth, y * 2f / heightmapHeight);
            vertices[vertIndex++] = new VertexPositionTexture(position, texCoords);
        }
    }

    // Create the vertex buffer and fill it with the constructed vertices.
    this.VertexBuffer = new VertexBuffer(Renderer.GraphicsDevice, typeof(VertexPositionTexture), vertices.Length, BufferUsage.WriteOnly);
    this.VertexBuffer.SetData(vertices);

    // Construct the index array.
    var indices = new short[gridHeight * gridWidth * 6]; // 2 triangles per grid square x 3 vertices per triangle
    int indicesIndex = 0;
    for(int y = 0; y < gridHeight; ++y)
    {
        for(int x = 0; x < gridWidth; ++x)
        {
            int start = y * heightmapWidth + x;
            indices[indicesIndex++] = (short)start;
            indices[indicesIndex++] = (short)(start + 1);
            indices[indicesIndex++] = (short)(start + heightmapWidth);
            indices[indicesIndex++] = (short)(start + 1);
            indices[indicesIndex++] = (short)(start + 1 + heightmapWidth);
            indices[indicesIndex++] = (short)(start + heightmapWidth);
        }
    }

    // Create the index buffer.
    this.IndexBuffer = new IndexBuffer(Renderer.GraphicsDevice, typeof(short), indices.Length, BufferUsage.WriteOnly);
    this.IndexBuffer.SetData(indices);
}
I guess the key point is that given a heightmap of size heightmapHeight * heightmapWidth, you need (heightmapHeight - 1) * (heightmapWidth - 1) * 6 indices, since you're drawing:
2 triangles per grid square
3 vertices per triangle
(heightmapHeight - 1) * (heightmapWidth - 1) grid squares in your terrain.
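For example, a 129x129 heightmap has 128 * 128 = 16,384 grid squares and therefore needs 16,384 * 6 = 98,304 indices. (With 16-bit indices as above, keep the vertex count, here 129 * 129 = 16,641, below 65,536; beyond that, switch the index buffer to 32-bit indices.)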