UV mapping artifact on a sphere - OpenGL / C++

I am UV mapping a 2D texture onto a 3D sphere from its X, Y, Z coordinates, using the formula
u = (0.5 + atan2(X, Y) / (2 * glm::pi<double>()));
v = (0.5 - asin(Z) / glm::pi<double>());
in modern OpenGL with C++.
I don't know why there is this artifact on the sphere, and I can't figure it out.

OK, I have figured this out and corrected it, so I'll finally answer here myself.
Big thanks to BDL and Rabbid76.
Whenever u == 0, I added a duplicate of the vertex position (X, Y, Z) to the vertices vector (or array) and incremented the index accordingly, but hardcoded the texture u to 1.0f for that duplicate.
No issues now; the seam looks perfect.
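For anyone wanting the shape of that fix in code, here is a minimal sketch; the Vertex struct and the helper name are illustrative, not the exact code from my project:

#include <glm/glm.hpp>
#include <vector>
#include <cstdint>

struct Vertex { glm::vec3 pos; glm::vec2 uv; };

// When a generated vertex lands on the seam (u == 0), also emit a duplicate
// with u forced to 1.0 and return its index, so triangles on the far side of
// the seam can reference the duplicate instead of wrapping back to u == 0.
uint32_t addSeamDuplicate(std::vector<Vertex>& vertices, const Vertex& v)
{
    vertices.push_back({ v.pos, glm::vec2(1.0f, v.uv.y) });
    return static_cast<uint32_t>(vertices.size() - 1);
}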

Below is the full construction of an indexed, textured sphere geometry. You should use indexed drawing for better performance.
m_meridians and m_latitudes control the detail level of the sphere:
for (size_t i = 0; i < m_meridians + 1; i++)
{
    for (size_t j = 0; j < m_latitudes + 2; j++)
    {
        // texCoord in the range [(0,0), (1,1)]
        QVector2D texCoord((float)i / m_meridians, (float)j / (m_latitudes + 1));
        // theta = longitude from 0 to 2*pi
        // phi   = latitude from -pi/2 to pi/2
        double theta = 2 * M_PI * texCoord.x();
        double phi   = M_PI * texCoord.y() - M_PI_2;
        QVector3D pos;
        pos.setY((float)std::sin(phi));
        pos.setX((float)(std::cos(phi) * std::cos(theta)));
        pos.setZ((float)(std::cos(phi) * std::sin(theta)));
        m_vertices.push_back({pos, texCoord});
    }
}
// Calculate triangle indices
for (size_t i = 0; i < m_meridians; i++)
{
    // Construct triangles between successive meridians
    for (size_t j = 0; j < m_latitudes + 1; j++)
    {
        // First triangle of the quad
        m_indices.push_back(i * (m_latitudes + 2) + j);
        m_indices.push_back(i * (m_latitudes + 2) + j + 1);
        m_indices.push_back((i + 1) * (m_latitudes + 2) + j + 1);
        m_triangleCount++;
        // Second triangle of the quad
        m_indices.push_back((i + 1) * (m_latitudes + 2) + j + 1);
        m_indices.push_back((i + 1) * (m_latitudes + 2) + j);
        m_indices.push_back(i * (m_latitudes + 2) + j);
        m_triangleCount++;
    }
}
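With m_vertices and m_indices filled, the sphere is drawn with a single indexed call. A minimal sketch, not from the original post, assuming the two vectors have already been uploaded to a VBO/IBO and a VAO with position and texcoord attributes is bound:

// Last argument is an offset into the bound GL_ELEMENT_ARRAY_BUFFER.
glDrawElements(GL_TRIANGLES,
               static_cast<GLsizei>(m_indices.size()),
               GL_UNSIGNED_INT,
               nullptr);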

Related

Adding Ozone to my sky simulation

I implemented a simulation of the colour of the sky a while ago by following the Scratchapixel tutorial: https://www.scratchapixel.com/lessons/procedural-generation-virtual-worlds/simulating-sky
I adapted it for the actual sun position and am able to get realistic sky colours during the day. However, I noticed that after sunset / before sunrise the colours are greyish when they should be deep blue. After researching this, I read that it is due to ozone absorption not being present in my model.
I used the extinction coefficients (3.426, 8.298, 0.356) * 0.06e-5, found in https://media.contentapi.ea.com/content/dam/eacom/frostbite/files/s2016-pbs-frostbite-sky-clouds-new.pdf,
and also read that since ozone does not scatter, it should only be added to the transmittance value:
T = exp(-tau), where tau = betaR * D_R + 1.1 * betaM * D_M + betaO * D_O, and each D is the optical depth summed along the view and light rays.
Therefore, I modified the code from scratchapixel as follows:
for (uint32_t i = 0; i < numSamples; ++i) {
    vec3 samplePosition = ray_in2.origin() + (tCurrent + segmentLength * 0.5f) * ray_in2.direction();
    float height = samplePosition.length() - atmosphere.earthRadius;
    // compute optical depth for light
    float hr = exp(-height / atmosphere.Hr) * segmentLength;
    float hm = exp(-height / atmosphere.Hm) * segmentLength;
    float ho = exp(-height / atmosphere.Hr) * segmentLength * (6e-7);
    opticalDepthR += hr;
    opticalDepthM += hm;
    opticalDepthO += ho;
    // light optical depth
    float t0Light, t1Light;
    ...
    for (j = 0; j < numSamplesLight; ++j) {
        vec3 samplePositionLight = samplePosition + (tCurrentLight + segmentLengthLight * 0.5f) * sunDir;
        float heightLight = samplePositionLight.length() - atmosphere.earthRadius;
        if (heightLight < 0) break;
        opticalDepthLightR += exp(-heightLight / atmosphere.Hr) * segmentLengthLight;
        opticalDepthLightM += exp(-heightLight / atmosphere.Hm) * segmentLengthLight;
        opticalDepthLightO += exp(-heightLight / atmosphere.Hr) * segmentLengthLight * (6e-7);
        tCurrentLight += segmentLengthLight;
    }
    if (j == numSamplesLight) {
        vec3 tau = betaR * (opticalDepthR + opticalDepthLightR) +
                   betaM * 1.1f * (opticalDepthM + opticalDepthLightM) +
                   betaO * (opticalDepthO + opticalDepthLightO);
        vec3 attenuation(exp(-tau.x()), exp(-tau.y()), exp(-tau.z()));
Summary:
I added the variables opticalDepthO and opticalDepthLightO, which are calculated the same way as the Rayleigh optical depth but multiplied by 6e-7.
Then the sum of opticalDepthLightO and opticalDepthO is multiplied by the extinction coefficient for ozone and added to the variable tau.
The problem is that I see no difference in my sky colour before and after adding ozone. Can someone guide me to what I'm doing wrong?

DirectX/C++: Marching Cubes Indexing

I've implemented the Marching Cubes algorithm in a DirectX environment (to test and have fun). Upon completion, I noticed that the resulting model looks heavily distorted, as if the indices were off.
I've attempted to extract the indices, but I think the vertices are already ordered correctly, using the lookup tables from http://paulbourke.net/geometry/polygonise/. The current build uses a 15^3 volume.
Marching cubes iterates over the array as normal:
for (float iX = 0; iX < CellFieldSize.x; iX++){
    for (float iY = 0; iY < CellFieldSize.y; iY++){
        for (float iZ = 0; iZ < CellFieldSize.z; iZ++){
            MarchCubes(XMFLOAT3(iX*StepSize, iY*StepSize, iZ*StepSize), StepSize);
        }
    }
}
The MarchCubes function that gets called:
void MC::MarchCubes(XMFLOAT3 in_Position, float Scale){
    ...
    int Corner, Vertex, VertexTest, Edge, Triangle, FlagIndex, EdgeFlags;
    float Offset;
    XMFLOAT3 Color;
    float CubeValue[8];
    XMFLOAT3 EdgeVertex[12];
    XMFLOAT3 EdgeNorm[12];
    //Local copy of the scalar field at the cube's eight corners
    for (Vertex = 0; Vertex < 8; Vertex++) {
        CubeValue[Vertex] = (this->*fSample)(
            in_Position.x + VertexOffset[Vertex][0] * Scale,
            in_Position.y + VertexOffset[Vertex][1] * Scale,
            in_Position.z + VertexOffset[Vertex][2] * Scale
        );
    }
    FlagIndex = 0;
FlagIndex = 0;
Intersection calculations:
...
//Test vertices for intersection.
for (VertexTest = 0; VertexTest < 8; VertexTest++){
    if (CubeValue[VertexTest] <= TargetValue)
        FlagIndex |= 1 << VertexTest;
}
//Find which edges are intersected by the surface.
EdgeFlags = CubeEdgeFlags[FlagIndex];
if (EdgeFlags == 0){
    return;
}
for (Edge = 0; Edge < 12; Edge++){
    if (EdgeFlags & (1 << Edge)) {
        Offset = GetOffset(CubeValue[EdgeConnection[Edge][0]], CubeValue[EdgeConnection[Edge][1]], TargetValue); // Get offset function definition. Needed!
        EdgeVertex[Edge].x = in_Position.x + VertexOffset[EdgeConnection[Edge][0]][0] + Offset * EdgeDirection[Edge][0] * Scale;
        EdgeVertex[Edge].y = in_Position.y + VertexOffset[EdgeConnection[Edge][0]][1] + Offset * EdgeDirection[Edge][1] * Scale;
        EdgeVertex[Edge].z = in_Position.z + VertexOffset[EdgeConnection[Edge][0]][2] + Offset * EdgeDirection[Edge][2] * Scale;
        GetNormal(EdgeNorm[Edge], EdgeVertex[Edge].x, EdgeVertex[Edge].y, EdgeVertex[Edge].z); //Need normal values
    }
}
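Since the comment above flags GetOffset as needed: here is a sketch of the usual Paul Bourke style interpolation (the standard formulation, not necessarily the poster's exact code):

// Linearly interpolate where the isosurface (the target value) crosses the
// edge between two sampled corner values; returns a fraction in [0, 1].
float GetOffset(float Value1, float Value2, float ValueDesired)
{
    float Delta = Value2 - Value1;
    if (Delta == 0.0f)
        return 0.5f; // degenerate edge: fall back to the midpoint
    return (ValueDesired - Value1) / Delta;
}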
And the original implementation gets pushed into a holding struct for DirectX.
for (Triangle = 0; Triangle < 5; Triangle++) {
    if (TriangleConnectionTable[FlagIndex][3 * Triangle] < 0) break;
    for (Corner = 0; Corner < 3; Corner++) {
        Vertex = TriangleConnectionTable[FlagIndex][3 * Triangle + Corner];
        GetColor(Color, EdgeVertex[Vertex], EdgeNorm[Vertex]);
        Data.VertexData.push_back(XMFLOAT3(EdgeVertex[Vertex].x, EdgeVertex[Vertex].y, EdgeVertex[Vertex].z));
        Data.NormalData.push_back(XMFLOAT3(EdgeNorm[Vertex].x, EdgeNorm[Vertex].y, EdgeNorm[Vertex].z));
        Data.ColorData.push_back(XMFLOAT4(Color.x, Color.y, Color.z, 1.0f));
    }
}
(This is the same ordering as the original GL implementation)
Turns out, I had missed a pair of parentheses controlling operator precedence:
EdgeVertex[Edge].x = in_Position.x + (VertexOffset[EdgeConnection[Edge][0]][0] + Offset * EdgeDirection[Edge][0]) * Scale;
EdgeVertex[Edge].y = in_Position.y + (VertexOffset[EdgeConnection[Edge][0]][1] + Offset * EdgeDirection[Edge][1]) * Scale;
EdgeVertex[Edge].z = in_Position.z + (VertexOffset[EdgeConnection[Edge][0]][2] + Offset * EdgeDirection[Edge][2]) * Scale;
Corrected, obtained Visine; resumed fun.

Creating a Sphere (using osg::Geometry) in OpenSceneGraph

I've spent quite some time trying to get this working, but my sphere just won't display.
Used the following code to make my function:
Creating a 3D sphere in Opengl using Visual C++
And the rest is simple OSG with osg::Geometry.
(Note: Not ShapeDrawable, as you can't implement custom shapes using that.)
Added the vertices, normals, texcoords into VecArrays.
For one thing, I suspect something is misbehaving, as my saved object is half empty.
Is there a way to convert the existing description into OSG?
Why? I want to understand how to create objects later on.
Indeed, this is linked to a later assignment, but for now I'm just preparing beforehand.
Sidenote: Since I have to make it without indices, I left them out.
My cylinder displays just fine without them.
Caveat: I'm not an OSG expert, but I did do some research.
OSG requires all of the faces to be defined in counter-clockwise order, so that backface culling can reject faces that are "facing away". The code you're using to generate the sphere does not generate all the faces in counter-clockwise order.
You can approach this a couple ways:
Adjust how the code generates the faces, by inserting the faces in CCW order.
Double up your model and insert each face twice, once with the vertices on each face in their current order and once with the vertices in reverse order.
Option 1 above will limit your total polygon count to what's needed. Option 2 will give you a sphere that's visible from outside the sphere as well as within.
To implement Option 2, you merely need to modify this loop from the code you linked to:
indices.resize(rings * sectors * 4);
std::vector<GLushort>::iterator i = indices.begin();
for(r = 0; r < rings-1; r++)
    for(s = 0; s < sectors-1; s++) {
        *i++ = r * sectors + s;
        *i++ = r * sectors + (s+1);
        *i++ = (r+1) * sectors + (s+1);
        *i++ = (r+1) * sectors + s;
    }
Double up the set of quads like so:
indices.resize(rings * sectors * 8);
std::vector<GLushort>::iterator i = indices.begin();
for(r = 0; r < rings-1; r++)
    for(s = 0; s < sectors-1; s++) {
        *i++ = r * sectors + s;
        *i++ = r * sectors + (s+1);
        *i++ = (r+1) * sectors + (s+1);
        *i++ = (r+1) * sectors + s;
        // The same quad again, wound the other way.
        *i++ = (r+1) * sectors + s;
        *i++ = (r+1) * sectors + (s+1);
        *i++ = r * sectors + (s+1);
        *i++ = r * sectors + s;
    }
That really is the "bigger hammer" solution, though.
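As an aside (my addition, not part of the original answer): if a two-sided sphere is all that's wanted, it may be simpler to leave the geometry alone and disable backface culling on the sphere's state set:

// Assumes 'sphereGeode' is the osg::Geode holding the sphere geometry.
sphereGeode->getOrCreateStateSet()->setMode( GL_CULL_FACE,
                                             osg::StateAttribute::OFF );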
Personally, I'm having a hard time figuring out why the original loop isn't sufficient. Intuiting my way through the geometry, it feels like it already generates CCW faces: each successive ring is above the previous one, and each successive sector is CCW around the surface of the sphere from the previous one. So the original order should already be CCW with respect to the face nearest the viewer.
EDIT Using the OpenGL code you linked before and the OSG tutorial you linked today, I put together what I think is a correct program to generate the osg::Geometry / osg::Geode for the sphere. I have no way to test the following code, but desk-checking it, it looks correct or at least largely correct.
#include <vector>

class SolidSphere
{
protected:
    osg::Geode sphereGeode;
    osg::Geometry sphereGeometry;
    osg::Vec3Array sphereVertices;
    osg::Vec3Array sphereNormals;
    osg::Vec2Array sphereTexCoords;

    std::vector<osg::DrawElementsUInt> spherePrimitiveSets;

public:
    SolidSphere(float radius, unsigned int rings, unsigned int sectors)
    {
        float const R = 1./(float)(rings-1);
        float const S = 1./(float)(sectors-1);
        int r, s;

        sphereGeode.addDrawable( &sphereGeometry );

        // Establish texture coordinates, vertex list, and normals
        for(r = 0; r < rings; r++)
            for(s = 0; s < sectors; s++)
            {
                float const y = sin( -M_PI_2 + M_PI * r * R );
                float const x = cos(2*M_PI * s * S) * sin( M_PI * r * R );
                float const z = sin(2*M_PI * s * S) * sin( M_PI * r * R );

                sphereTexCoords.push_back( osg::Vec2(s*S, r*R) );
                sphereVertices.push_back ( osg::Vec3(x * radius,
                                                     y * radius,
                                                     z * radius) );
                sphereNormals.push_back  ( osg::Vec3(x, y, z) );
            }

        sphereGeometry.setVertexArray  ( &sphereVertices );
        sphereGeometry.setTexCoordArray( 0, &sphereTexCoords );

        // Generate quads for each face.
        for(r = 0; r < rings-1; r++)
            for(s = 0; s < sectors-1; s++)
            {
                spherePrimitiveSets.push_back(
                    osg::DrawElementsUInt( osg::PrimitiveSet::QUADS, 0 )
                );
                osg::DrawElementsUInt& face = spherePrimitiveSets.back();

                // Corners of quads should be in CCW order.
                face.push_back( (r + 0) * sectors + (s + 0) );
                face.push_back( (r + 0) * sectors + (s + 1) );
                face.push_back( (r + 1) * sectors + (s + 1) );
                face.push_back( (r + 1) * sectors + (s + 0) );

                sphereGeometry.addPrimitiveSet( &face );
            }
    }

    osg::Geode     *getGeode()     { return &sphereGeode;     }
    osg::Geometry  *getGeometry()  { return &sphereGeometry;  }
    osg::Vec3Array *getVertices()  { return &sphereVertices;  }
    osg::Vec3Array *getNormals()   { return &sphereNormals;   }
    osg::Vec2Array *getTexCoords() { return &sphereTexCoords; }
};
You can use the getXXX methods to get the various pieces. I didn't see how to hook the surface normals up to anything, but I do store them in a Vec3Array. If you have a use for them, they're computed, stored, and waiting to be hooked to something.
That code calls glutSolidSphere() to draw a sphere, but it doesn't make sense to call it if your application isn't using GLUT to display a window with a 3D context.
There is another easy way to draw a sphere: invoke gluSphere() (you probably have GLU installed):
void gluSphere(GLUquadric* quad,
               GLdouble radius,
               GLint slices,
               GLint stacks);

Parameters:
quad - Specifies the quadrics object (created with gluNewQuadric).
radius - Specifies the radius of the sphere.
slices - Specifies the number of subdivisions around the z axis (similar to lines of longitude).
stacks - Specifies the number of subdivisions along the z axis (similar to lines of latitude).
Usage:
// If you also need to include glew.h, do it before glu.h
#include <glu.h>
GLUquadric* _quadratic = gluNewQuadric();
if (_quadratic == NULL)
{
    std::cerr << "!!! Failed gluNewQuadric" << std::endl;
    return;
}
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0, 0.0, -5.0);
glColor3ub(255, 97, 3);
gluSphere(_quadratic, 1.4f, 64, 64);
glFlush();
gluDeleteQuadric(_quadratic);
It's probably wiser to move the gluNewQuadric() call to the constructor of your class since it needs to be allocated only once, and move the call to gluDeleteQuadric() to the destructor of the class.
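A minimal sketch of that suggestion (the class name and layout are illustrative, not from the original answer):

// RAII wrapper: the quadric is allocated once and freed automatically.
class QuadricHolder
{
public:
    QuadricHolder() : m_quad(gluNewQuadric()) {}
    ~QuadricHolder() { if (m_quad) gluDeleteQuadric(m_quad); }
    QuadricHolder(const QuadricHolder&) = delete;
    QuadricHolder& operator=(const QuadricHolder&) = delete;

    GLUquadric* get() const { return m_quad; }

private:
    GLUquadric* m_quad;
};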
@JoeZ's answer is excellent, but the OSG code has some errors/bad practices. Here's the updated code. It's been tested and it shows a very nice sphere.
osg::ref_ptr<osg::Geode> buildSphere( const double radius,
                                      const unsigned int rings,
                                      const unsigned int sectors )
{
    osg::ref_ptr<osg::Geode> sphereGeode = new osg::Geode;
    osg::ref_ptr<osg::Geometry> sphereGeometry = new osg::Geometry;
    osg::ref_ptr<osg::Vec3Array> sphereVertices = new osg::Vec3Array;
    osg::ref_ptr<osg::Vec3Array> sphereNormals = new osg::Vec3Array;
    osg::ref_ptr<osg::Vec2Array> sphereTexCoords = new osg::Vec2Array;

    float const R = 1. / static_cast<float>( rings - 1 );
    float const S = 1. / static_cast<float>( sectors - 1 );

    sphereGeode->addDrawable( sphereGeometry );

    // Establish texture coordinates, vertex list, and normals
    for( unsigned int r( 0 ); r < rings; ++r ) {
        for( unsigned int s( 0 ); s < sectors; ++s ) {
            float const y = sin( -M_PI_2 + M_PI * r * R );
            float const x = cos( 2 * M_PI * s * S ) * sin( M_PI * r * R );
            float const z = sin( 2 * M_PI * s * S ) * sin( M_PI * r * R );

            sphereTexCoords->push_back( osg::Vec2( s * S, r * R ) );
            sphereVertices->push_back( osg::Vec3( x * radius,
                                                  y * radius,
                                                  z * radius ) );
            sphereNormals->push_back( osg::Vec3( x, y, z ) );
        }
    }

    sphereGeometry->setVertexArray( sphereVertices );
    sphereGeometry->setTexCoordArray( 0, sphereTexCoords );

    // Generate quads for each face.
    for( unsigned int r( 0 ); r < rings - 1; ++r ) {
        for( unsigned int s( 0 ); s < sectors - 1; ++s ) {
            osg::ref_ptr<osg::DrawElementsUInt> face =
                new osg::DrawElementsUInt( osg::PrimitiveSet::QUADS, 0 );

            // Corners of quads should be in CCW order.
            face->push_back( ( r + 0 ) * sectors + ( s + 0 ) );
            face->push_back( ( r + 0 ) * sectors + ( s + 1 ) );
            face->push_back( ( r + 1 ) * sectors + ( s + 1 ) );
            face->push_back( ( r + 1 ) * sectors + ( s + 0 ) );

            sphereGeometry->addPrimitiveSet( face );
        }
    }

    return sphereGeode;
}
Changes:
The OSG elements used in the code are now smart pointers [1]. Moreover, classes like Geode and Geometry have protected destructors, so the only way to instantiate them is via dynamic allocation.
Removed spherePrimitiveSets as it isn't needed in the current version of the code.
I put the code in a free function, as I don't need a Sphere class in my code. I omitted the getters and the protected attributes. They aren't needed: if you need to access, say, the geometry, you can get it via: sphereGeode->getDrawable(...). The same goes for the rest of the attributes.
[1] See Rule of thumb #1 here. It's a bit old, but the advice still holds.
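For completeness, a minimal usage sketch (my addition, not part of the original answer; it assumes osgViewer is available):

#include <osgViewer/Viewer>

osg::ref_ptr<osg::Group> root = new osg::Group;
root->addChild( buildSphere( 1.0, 32, 32 ) );

osgViewer::Viewer viewer;
viewer.setSceneData( root );
viewer.run();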

Using glColorPointer with glDrawElements results in nothing being drawn

I'm working on making uniformly colored spheres for a project and I'm running into an issue. The spheres render fine, but when I try to color them with glColorPointer they stop appearing. OpenGL isn't reporting any errors when I call glGetError, so I'm at a loss as to why this happens.
The code to generate the vertices, colors etc:
void SphereObject::setupVertices()
{
    //determine the array sizes
    //vertices per row (+1 for the repeated one at the end) * three for each coordinate
    //times the number of rows
    int arraySize = myNumVertices * 3;
    myNumIndices = (myVerticesPerRow + 1) * myRows * 2;
    myVertices = new GLdouble[arraySize];
    myIndices = new GLuint[myNumIndices];
    myNormals = new GLdouble[arraySize];
    myColors = new GLint[myNumVertices * 4];

    //use spherical coordinates to calculate the vertices
    double phiIncrement = 360 / myVerticesPerRow;
    double thetaIncrement = 180 / (double)myRows;
    int arrayIndex = 0;
    int colorArrayIndex = 0;
    int indicesIndex = 0;
    double x, y, z = 0;

    for(double theta = 0; theta <= 180; theta += thetaIncrement)
    {
        //loop including the repeat for the last vertex
        for(double phi = 0; phi <= 360; phi += phiIncrement)
        {
            //make sure that the last vertex is repeated
            if(360 - phi < phiIncrement)
            {
                x = myRadius * sin(radians(theta)) * cos(radians(0));
                y = myRadius * sin(radians(theta)) * sin(radians(0));
                z = myRadius * cos(radians(theta));
            }
            else
            {
                x = myRadius * sin(radians(theta)) * cos(radians(phi));
                y = myRadius * sin(radians(theta)) * sin(radians(phi));
                z = myRadius * cos(radians(theta));
            }

            myColors[colorArrayIndex] = myColor.getX();
            myColors[colorArrayIndex + 1] = myColor.getY();
            myColors[colorArrayIndex + 2] = myColor.getZ();
            myColors[colorArrayIndex + 3] = 1;

            myVertices[arrayIndex] = x;
            myVertices[arrayIndex + 1] = y;
            myVertices[arrayIndex + 2] = z;

            if(theta <= 180 - thetaIncrement)
            {
                myIndices[indicesIndex] = arrayIndex / 3;
                myIndices[indicesIndex + 1] = (arrayIndex / 3) + myVerticesPerRow + 1;
                indicesIndex += 2;
            }

            arrayIndex += 3;
            colorArrayIndex += 4;
        }
    }
}
And the code that actually renders the thing:
void SphereObject::render()
{
    glPushMatrix();
    glPushClientAttrib(GL_CLIENT_VERTEX_ARRAY_BIT);

    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_INT, 0, myColors);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_DOUBLE, 0, myVertices);

    glDrawElements(GL_QUAD_STRIP, myNumIndices, GL_UNSIGNED_INT, myIndices);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glPopClientAttrib();
    glPopMatrix();
}
Any and all help would be appreciated. I'm really having a hard time for some reason.
When you use GL_INT (or any integer type) for color pointer, it linearly maps the largest possible integer value to 1.0f (maximum color), and 0 to 0.0f (minimum color).
Therefore unless your values of RGB and A are in the billions, they will likely appear completely black (or transparent if that's enabled). I see that you've got alpha = 1, which will essentially be zero after conversion to float.
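A minimal sketch of the kind of fix this implies (my addition; it assumes myColor's components are already in the [0, 1] range): store the colors as floats, so no integer normalization takes place.

// In setupVertices(): allocate float colors instead of GLint.
GLfloat* myColors = new GLfloat[myNumVertices * 4];
...
myColors[colorArrayIndex]     = (GLfloat)myColor.getX();
myColors[colorArrayIndex + 1] = (GLfloat)myColor.getY();
myColors[colorArrayIndex + 2] = (GLfloat)myColor.getZ();
myColors[colorArrayIndex + 3] = 1.0f; // alpha stays fully opaque
...
// In render(): tell OpenGL the array now holds floats.
glColorPointer(4, GL_FLOAT, 0, myColors);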

A method for indexing triangles from a loaded heightmap?

I am currently writing a method to load a noisy heightmap, but I'm stuck on generating the triangles for it. I want an algorithm that takes an image with its width and height and constructs a terrain node out of it.
Here's what I have so far, in somewhat pseudocode:
Vertex* vertices = new Vertex[image.width * image.height];
Index* indices; // How do I judge how many indices I will have?
float scaleX = 1 / image.width;
float scaleY = 1 / image.height;
float currentYScale = 0;
for(int y = 0; y < image.height; ++y) {
    float currentXScale = 0;
    for (int x = 0; x < image.width; ++x) {
        Vertex* v = vertices[x * y];
        v.x = currentXScale;
        v.y = currentYScale;
        v.z = image[x,y];
        currentXScale += scaleX;
    }
    currentYScale += scaleY;
}
This works well enough for my needs. My only problem is this: how would I calculate the number of indices and their positions for drawing the triangles? I have some familiarity with indices, but not with how to calculate them programmatically; I can only do that statically.
As far as your code above goes, using vertices[x * y] isn't right - if you use that, then e.g. vert(2,3) == vert(3,2). What you want is something like vertices[y * image.width + x], but you can do it more efficiently by incrementing a counter (see below).
Here's the equivalent code I use. It's in C# unfortunately, but hopefully it should illustrate the point:
/// <summary>
/// Constructs the vertex and index buffers for the terrain (for use when rendering the terrain).
/// </summary>
private void ConstructBuffers()
{
    int heightmapHeight = Heightmap.GetLength(0);
    int heightmapWidth = Heightmap.GetLength(1);
    int gridHeight = heightmapHeight - 1;
    int gridWidth = heightmapWidth - 1;

    // Construct the individual vertices for the terrain.
    var vertices = new VertexPositionTexture[heightmapHeight * heightmapWidth];
    int vertIndex = 0;
    for(int y = 0; y < heightmapHeight; ++y)
    {
        for(int x = 0; x < heightmapWidth; ++x)
        {
            var position = new Vector3(x, y, Heightmap[y,x]);
            var texCoords = new Vector2(x * 2f / heightmapWidth, y * 2f / heightmapHeight);
            vertices[vertIndex++] = new VertexPositionTexture(position, texCoords);
        }
    }

    // Create the vertex buffer and fill it with the constructed vertices.
    this.VertexBuffer = new VertexBuffer(Renderer.GraphicsDevice, typeof(VertexPositionTexture), vertices.Length, BufferUsage.WriteOnly);
    this.VertexBuffer.SetData(vertices);

    // Construct the index array.
    var indices = new short[gridHeight * gridWidth * 6]; // 2 triangles per grid square x 3 vertices per triangle
    int indicesIndex = 0;
    for(int y = 0; y < gridHeight; ++y)
    {
        for(int x = 0; x < gridWidth; ++x)
        {
            int start = y * heightmapWidth + x;
            indices[indicesIndex++] = (short)start;
            indices[indicesIndex++] = (short)(start + 1);
            indices[indicesIndex++] = (short)(start + heightmapWidth);
            indices[indicesIndex++] = (short)(start + 1);
            indices[indicesIndex++] = (short)(start + 1 + heightmapWidth);
            indices[indicesIndex++] = (short)(start + heightmapWidth);
        }
    }

    // Create the index buffer.
    this.IndexBuffer = new IndexBuffer(Renderer.GraphicsDevice, typeof(short), indices.Length, BufferUsage.WriteOnly);
    this.IndexBuffer.SetData(indices);
}
I guess the key point is that given a heightmap of size heightmapHeight * heightmapWidth, you need (heightmapHeight - 1) * (heightmapWidth - 1) * 6 indices, since you're drawing:
2 triangles per grid square
3 vertices per triangle
(heightmapHeight - 1) * (heightmapWidth - 1) grid squares in your terrain.
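Since the question is in C++, here is the same index generation as a C++ sketch (a direct port of the loop above; the function name is mine):

#include <cstdint>
#include <vector>

// Builds 6 indices (two triangles) per grid square, for a heightmap of
// width x height vertices laid out row-major as in the answer above.
std::vector<uint32_t> buildTerrainIndices(uint32_t width, uint32_t height)
{
    std::vector<uint32_t> indices;
    indices.reserve(static_cast<size_t>(width - 1) * (height - 1) * 6);

    for (uint32_t y = 0; y < height - 1; ++y) {
        for (uint32_t x = 0; x < width - 1; ++x) {
            uint32_t start = y * width + x;
            // First triangle of the grid square.
            indices.push_back(start);
            indices.push_back(start + 1);
            indices.push_back(start + width);
            // Second triangle.
            indices.push_back(start + 1);
            indices.push_back(start + 1 + width);
            indices.push_back(start + width);
        }
    }
    return indices;
}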