DirectX Vertex Color - C++

The code is here as requested:
void MakeTeapotRed()
{
    D3DXCreateTeapot(Device, &Teapot, 0);
}
So how do I change the vertex color of the teapot? If you're thinking of a material, I already know that; I just need the vertex color, which is supposed to be a much simpler thing than a material. I can do this with geometry manually laid out with vertex buffers and index buffers, but how do you apply it to a mesh that already has its VB and IB information filled out?
class ColorVertex
{
public:
    ColorVertex() {}
    ColorVertex(float x, float y, float z, D3DCOLOR color)
    {
        m_x = x;
        m_y = y;
        m_z = z;
        m_color = color;
    }
    float m_x, m_y, m_z; // 3D coordinates
    D3DCOLOR m_color;
    static const DWORD FVF;
};
const DWORD ColorVertex::FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;
The code I just posted is the class for the vertex information, called ColorVertex. As you can see, it is set up for vertex color, i.e. color that doesn't require (and in fact must NOT have) lighting to work properly, as shown by FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE.
Again, people seem to have a hard time understanding the problem: I need to update the vertices to include color, for objects like the teapot, sphere, or any mesh that can be created through D3DXCreate[objects], e.g. D3DXCreateTeapot(arguments).
Please lay the code out line by line; I'm a noob in DirectX, not in C++.

Look at the section on accessing the vertex buffer. You have to get the vertex declaration and examine it to find out how the data for each vertex is laid out.
Once you have identified how the colour is stored, loop through each vertex and change the value. When you finish, unlock the mesh's vertex buffer and you are done.
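For the teapot specifically, a minimal sketch of that process could look like the code below (D3D9 assumed). D3DXCreateTeapot produces a mesh whose FVF is D3DFVF_XYZ | D3DFVF_NORMAL, so it is first cloned to the ColorVertex FVF, which adds the diffuse color slot:

ID3DXMesh* pColoredTeapot = 0;
Teapot->CloneMeshFVF(D3DXMESH_MANAGED, ColorVertex::FVF, Device, &pColoredTeapot);

ColorVertex* pVerts = 0;
pColoredTeapot->LockVertexBuffer(0, (void**)&pVerts);
for (DWORD i = 0; i < pColoredTeapot->GetNumVertices(); ++i)
    pVerts[i].m_color = D3DCOLOR_XRGB(255, 0, 0); // paint every vertex red
pColoredTeapot->UnlockVertexBuffer();

// Render pColoredTeapot instead of Teapot; the diffuse color only
// shows with lighting off, matching the FVF above:
// Device->SetRenderState(D3DRS_LIGHTING, FALSE);

Note the clone drops the normals, which is fine here because the whole point is to render without lighting.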
I just need to know the color vertex which is supposed to be a much simpler thing than material
I would have to disagree; a material looks like it would be a lot easier.
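For comparison, a minimal sketch of the material route (D3D9 fixed-function; no mesh cloning or vertex editing needed):

D3DMATERIAL9 mtrl;
ZeroMemory(&mtrl, sizeof(mtrl));
mtrl.Diffuse = D3DXCOLOR(1.0f, 0.0f, 0.0f, 1.0f); // red
mtrl.Ambient = mtrl.Diffuse;
Device->SetMaterial(&mtrl);
// ...then draw the teapot with lighting enabled.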

Related

Switching from 3D to 2D in OpenGL

I'm making a weather simulation in OpenGL 4.0 and am trying to create the sky by drawing a fullscreen quad in the background. I'm trying to do that by having the vertex shader pass four vertices through and then drawing a triangle strip. Everything compiles just fine and I can see all the other objects I've made before, but the sky is nowhere to be seen. What am I doing wrong?
main.cpp
GLint stage = glGetUniformLocation(myShader.Program, "stage");
//...
glBindVertexArray(FS); //has four coordinates (-1,-1,1) to (1,1,1) in buffer object
glUniform1i(stage, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
vertex shader
#version 400 core
layout(location = 0) in vec3 position;
uniform int stage;
void main()
{
    if (stage == 1)
    {
        gl_Position = vec4(position, 1.0f);
    }
    else
    {
        //...
    }
}
fragment shader
#version 400 core
out vec4 color;
uniform int stage;
void main()
{
    if (stage == 1)
    { // placeholder gray colour so I can see the sky
        color = vec4(0.5f, 0.5f, 0.5f, 1.0f);
    }
    else
    {
        //...
    }
}
I should also mention that I'm a beginner in OpenGL and that it really has to be in OpenGL 4.0 or later.
EDIT:
I've figured out where the problem is, but still don't know how to fix it. The square exists, but it only displays if I multiply it by the view and projection matrices (but then it doesn't stay glued to the screen and just rotates along with the rest of the scene, which I do not want). Essentially, I somehow need to switch to 2D (screen space, or however it's called), draw the square, and switch back to 3D so that all the other objects work fine. How?
The issue was putting 1 as the z coordinate; using 0.999f instead solved it. (A clip-space z of exactly 1.0 lands on the far plane, equal to the depth buffer's cleared value of 1.0, so the default GL_LESS depth test rejects every fragment.)
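An alternative that avoids the magic 0.999f, sketched with the names from the question: draw the sky first with the depth test disabled (which also skips depth writes), then draw the 3D scene as usual.

glDisable(GL_DEPTH_TEST);  // sky passes regardless of its z value
glUniform1i(stage, 1);
glBindVertexArray(FS);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
glEnable(GL_DEPTH_TEST);   // the rest of the scene depth-tests as usual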

Applying animations to objects in openGL

I have stumbled upon a problem while writing a program in which I am animating shapes using OpenGL.
Currently I am creating some shapes with the following snippet:
for (int i = 50; i <= 150; i = i + 50) {
    for (int j = 50; j <= 750; j = j + 200) {
        // Draw rectangle shape at position (j, i);
        // shape has additional capability for animations
    }
}
which gives me this output:
Now I have to resize these rectangles and move them all to another position. I have the final target point for the first rectangle, at position[0][0], where it should be moved. However, when I animate the size of these rectangles with something like
rectangle.resize(newWidth, newHeight, animationTime);
the rectangles, for obvious reasons, do not stick together, and I get something like:
I am looking for something like grouping, which can bind these shapes together so that even when different animations (resize, motion, etc.) are applied, the vertices or boundaries keep touching.
Note that grouping is the main thing here. I might have a future requirement to group the two rectangles in the last column while independent animations (like rotations) are already happening on them. So I picture this as a plane/container holding those two rectangles, where the plane/container itself can be animated for position etc. I am fine with an algorithm/concept rather than code.
Instead of animating the geometry on the CPU, animate scale/position matrices on the CPU and leave the transformation of the geometry to the vertex shader via the MVP matrix. Use one and the same scale matrix for all the rectangles. (Or two matrices, if your scale factor is different in X and Y).
PS. Here's an example:
float sc = 0;

void init()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void on_each_frame()
{
    // do other things

    // draw pulsating rectangles
    sc += 0.02f;
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    float s = sinf(sc) + 1.5f;
    glScalef(s, s, 1.0f); // note: glScalef takes three scale factors
    // draw rectangles as usual, **without** scaling them
    glPopMatrix();

    // do other things
}
Think about implementing a 'DrawableAnimatableObject': a high-level 3D object that can animate and draw itself, and that contains your polygons (multiple rectangles in your case) as internal data. See the following incomplete code to get the idea:
class DrawableAnimatableObject {
private:
    Mesh *mesh;
    Vector3 position;
    Quaternion orientation;
    Vector3 scale;
    Matrix transform;

public:
    DrawableAnimatableObject();
    ~DrawableAnimatableObject();

    // Update the object properties for the next frame:
    // adjusts the scale, position or orientation of your
    // object to suit your animation.
    void update();

    // Draw the object.
    // This function converts scale, orientation and position
    // information into proper OpenGL matrices and passes them
    // to the shaders prior to drawing the polygons, so there
    // is no need to resize the polygons individually.
    void draw();

    // Standard set-get:
    void setPosition(Vector3 p);
    Vector3 getPosition();
    void setOrientation(Quaternion q);
    Quaternion getOrientation();
    void setScale(float f);
    Vector3 getScale();
};
In this code, Mesh is a data structure that contains your polygons. Simply put, it can be a vertex-face list, or a more complicated structure like half-edge. The DrawableAnimatableObject::draw() function should look something like this:
void DrawableAnimatableObject::draw() {
    transform = Matrix::CreateTranslation(position) *
                Matrix::CreateFromQuaternion(orientation) *
                Matrix::CreateScale(scale);
    // In modern OpenGL this matrix should be passed to the shaders.
    // In legacy OpenGL you would apply it with:
    glPushMatrix();
    glMultMatrixf(transform); // assumes Matrix converts to const GLfloat*
    glBegin(GL_QUADS);
    //...
    // Draw your rectangles here.
    //...
    glEnd();
    glPopMatrix();
}
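To make the grouping idea concrete, here is a minimal legacy-OpenGL sketch (drawRectangle and the group* variables are hypothetical): push one container transform, draw every member shape in group-local coordinates, and animate only the container. Shared edges stay touching because all members move through the same matrix.

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(groupX, groupY, 0.0f);     // animates the group's position
glScalef(groupScale, groupScale, 1.0f); // animates the group's size
drawRectangle(0.0f, 0.0f, w, h);        // members in group-local coords
drawRectangle(w, 0.0f, w, h);           // neighbour shares an edge exactly
glPopMatrix();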

OpenGL arrangement of normal data in VBOs

When reading examples of simple VBO programs, I've noticed there seems to be an association of normal data with vertex data. But from the definition of a normal, I would have thought that normal data should be associated with face data.
From the code segment below I can see that the normal data for each MyVertex is the same, so a normal for the "triangle face" would make sense. But I am unsure how one would store normal data for larger objects, where several faces may share the same vertices as stored in the GL_ELEMENT_ARRAY_BUFFER.
Questions:
How does OpenGL conceptually handle the normal data? Or have I made a wrong assumption in how normals should work somewhere?
(code below from http://www.opengl.org/wiki/VBO_-_just_examples)
struct MyVertex
{
    float x, y, z;    // Vertex
    float nx, ny, nz; // Normal
    float s0, t0;     // Texcoord0
};
MyVertex pvertex[3];
//VERTEX 0
pvertex[0].x = 0.0;
pvertex[0].y = 0.0;
pvertex[0].z = 0.0;
pvertex[0].nx = 0.0;
pvertex[0].ny = 0.0;
pvertex[0].nz = 1.0;
pvertex[0].s0 = 0.0;
pvertex[0].t0 = 0.0;
//VERTEX 1
Thanks in advance
In OpenGL, normals are vector attributes, just like position or texture coordinates.
Having per-face normals may seem reasonable, but wouldn't work in practice.
Reason: One triangle is physically flat, but is often an approximation of a curved surface. Having normal vectors different among the vertices of a triangle allows you to interpolate between them to get an approximated normal vector at any point of the surface.
Think of a vertex normal as a sample of the normal at some particular points of a smooth surface.
(Of course, when rendering surfaces with hard edges, like a cube, the above doesn't really help, and such cases may require you to have duplicate vertices differing only by the normal.)
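To illustrate "normals are vertex attributes just like positions", here is a sketch of how the interleaved MyVertex layout from the question would be declared in modern OpenGL (a created buffer object vbo, a bound VAO, and attribute locations 0-2 are assumed; offsetof comes from <cstddef>):

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(pvertex), pvertex, GL_STATIC_DRAW);
// position: 3 floats at the start of each vertex
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), (void*)offsetof(MyVertex, x));
// normal: 3 floats, fetched per vertex exactly like the position
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), (void*)offsetof(MyVertex, nx));
// texcoord0: 2 floats
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(MyVertex), (void*)offsetof(MyVertex, s0));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);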

DirectX Clip space texture coordinates

Okay first up I am using:
DirectX 10
C++
Okay, this is a bit of a bizarre one to me. I wouldn't usually ask the question, but I've been forced by circumstance. I have two full-screen triangles (not a quad, for reasons I won't go into!), aligned to the screen simply because they are not transformed.
In the DirectX vertex declaration I am passing a 3-component float (Pos x,y,z) and a 2-component float (Texcoord x,y). Texcoord z is reserved for Texture2D arrays, and I'm currently defaulting it to 0 in the pixel shader.
I wrote this to achieve the simple task:
float fStartX = -1.0f;
float fEndX = 1.0f;
float fStartY = 1.0f;
float fEndY = -1.0f;
float fStartU = 0.0f;
float fEndU = 1.0f;
float fStartV = 0.0f;
float fEndV = 1.0f;
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fStartY, 0, fEndU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fEndY, 0, fStartU, fEndV));
IA Layout: (Update)
D3D10_INPUT_ELEMENT_DESC ieDesc[2] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,12, D3D10_INPUT_PER_VERTEX_DATA, 0 }
};
Data reaches the vertex shader in the following format: (Update)
struct VS_INPUT
{
    float3 fPos :POSITION;
    float3 fTexcoord :TEXCOORD0;
};
Within my vertex and pixel shaders not a lot is happening for this particular draw call; the pixel shader does most of the work, sampling from a texture using the specified UV coordinates. However, this isn't working quite as expected: it appears that I am getting only 1 pixel of the sampled texture.
The workaround was to do the following in the pixel shader: (Update)
SamplerState s0 : register(s0);
Texture2DArray<float4> meshTex : register(t0);

float4 psMain(in VS_OUTPUT vOut) : SV_TARGET
{
    float4 Color;
    vOut.fTexcoord.z = 0;
    vOut.fTexcoord.x = vOut.fPos.x * 0.5f;
    vOut.fTexcoord.y = vOut.fPos.y * 0.5f;
    vOut.fTexcoord.x += 0.5f;
    vOut.fTexcoord.y += 0.5f;
    Color = meshTex.Sample(s0, vOut.fTexcoord);
    Color.a = 1.0f;
    return Color;
}
It was also worth noting that this worked with the following VS out struct defined in the shaders:
struct VS_OUTPUT
{
    float4 fPos :POSITION0; // SV_POSITION won't work in this case
    float3 fTexcoord :TEXCOORD0;
};
Now I have a texture that's stretched to fit the entire screen; both triangles already cover it, but why did the texture UVs not get used as expected?
To clarify, I am using a point sampler and have tried both clamp and wrap addressing on the UVs.
I was a bit curious and found the solution/workaround mentioned above, but I'd prefer not to have to use it, if anyone knows why this is happening.
What semantics are you specifying for your vertex type? Are they properly aligned with your vertices and also with your shader? If you are using a D3DXVECTOR4, D3DXVECTOR3 setup (as shown in your VS code), this could be a problem if your CreateVertex() returns a D3DXVECTOR3, D3DXVECTOR2 struct.
It would be reassuring to see your pixel-shader code too.
Okay well, for one, texture coordinates outside the 0..1 range get clamped. I made the mistake of assuming that by going to clip space (-1 to +1) the texture coordinates would too. This is not the case; they still go from 0.0 to 1.0.
The reason the code in the pixel shader worked is that it used the clip-space x,y,z coordinates to overwrite those texture coordinates; an oversight on my part. However, the pixel-shader code produces a texture stretched across a full-screen 'quad', so it might be useful to someone somewhere ;)
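For reference, a small C++ sketch of the remapping the workaround relies on (the helper names are hypothetical): clip space runs -1..+1 with +Y up, while texture space runs 0..1 with V growing downward, hence the flip.

inline float clipToTexU(float clipX) { return clipX * 0.5f + 0.5f; }
inline float clipToTexV(float clipY) { return -clipY * 0.5f + 0.5f; } // flip Y
// e.g. clipToTexU(-1.0f) == 0.0f and clipToTexU(+1.0f) == 1.0f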

Texture Sampling in OpenGL

I need to get the color at a particular coordinate from a texture. There are two ways I can do this: by looking at the raw PNG data, or by sampling my generated OpenGL texture. Is it possible to sample an OpenGL texture to get the color (RGBA) at a given UV or XY coordinate? If so, how?
Off the top of my head, your options are:
1. Fetch the entire texture using glGetTexImage() and check the texel you're interested in.
2. Draw the texel you're interested in (e.g. by rendering a GL_POINTS primitive), then grab that pixel from the framebuffer using glReadPixels().
3. Keep a copy of the texture image handy and leave OpenGL out of it.
Options 1 and 2 are horribly inefficient (although you could speed 2 up somewhat by using pixel buffer objects and doing the copy asynchronously), so my favourite by FAR is option 3.
Edit: If you have the GL_APPLE_client_storage extension (i.e. you're on a Mac or iPhone), then that's option 4, which is the winner by a long way.
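A minimal C++ sketch of option 3 (the ImageRGBA struct and its pixel buffer are assumed to come from your PNG decode):

struct ImageRGBA {
    int w, h;
    const unsigned char* pixels; // w * h * 4 bytes, row-major, RGBA order
};

// Returns a pointer to the 4 RGBA bytes at texture coordinate (u, v)
// in [0, 1]; nearest-neighbour lookup, clamped to the image edges.
const unsigned char* texelAt(const ImageRGBA& img, float u, float v)
{
    int x = (int)(u * img.w);
    int y = (int)(v * img.h);
    if (x < 0) x = 0; else if (x >= img.w) x = img.w - 1;
    if (y < 0) y = 0; else if (y >= img.h) y = img.h - 1;
    return img.pixels + (y * img.w + x) * 4;
}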
The most efficient way I've found is to access the texture data (you should have your PNG decoded to make it into a texture anyway) and interpolate between the texels yourself. Assuming your texcoords are in [0,1], multiply texwidth * u and texheight * v, and use the results to find the position on the texture. If they're whole numbers, just use the pixel directly; otherwise use the integer parts to find the bordering pixels and interpolate between them based on the fractional parts.
Here's some HLSL-like pseudocode for it. It should be fairly clear:
float3 sample(float2 coord, texture tex) {
    float x = tex.w * coord.x;         // X position on the texture
    int ix = (int) x;                  // X position as a whole number
    float y = tex.h * coord.y;
    int iy = (int) y;
    float3 tl = getTexel(ix,   iy);    // top-left pixel
    float3 tr = getTexel(ix+1, iy);    // top-right pixel
    float3 bl = getTexel(ix,   iy+1);  // bottom-left pixel
    float3 br = getTexel(ix+1, iy+1);  // bottom-right pixel
    // blend each horizontal pair by the fractional X,
    // then blend the two results by the fractional Y
    float3 top    = interpolate(tl, tr, frac(x));
    float3 bottom = interpolate(bl, br, frac(x));
    return interpolate(top, bottom, frac(y));
}
As others have suggested, reading a texture back from VRAM is horribly inefficient and should be avoided like the plague if you're even remotely interested in performance.
The two workable solutions, as far as I know:
Keep a copy of the pixel data handy (wastes memory, though)
Do it using a shader