OpenGL - Easiest way to draw a square with a texture - C++

Usually I draw a square with a texture like this:
Create a VBO with 4 coordinates (A,B,C,D for the square)
Create an EBO with 6 indices (A,C,D and B,C,D) telling OpenGL that I want to draw the square out of 2 triangles.
Draw these elements with a texture.
Isn't there an easier way that doesn't require an EBO array?
Because it is not very handy to use... For example, if I have data like this:
VBO = [-0.8f, 0.5f, 0.0f, ...]
EBO = [0, 1, 3, 1, 2, 3, ...]
Then, whenever I remove a square from my VBO, I also need to remove the corresponding indices from my EBO array and re-arrange it.
Is there a better way to do this?

If you really only want to draw a square with a texture on it, you can create a new, empty VAO and just call glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
The vertex shader then looks like this:
out vec2 mapping;
void main()
{
float size = 1.0f;
vec2 offset;
switch(gl_VertexID)
{
case 0:
//Bottom-left
mapping = vec2(0.0f, 0.0f);
offset = vec2(-size, -size);
break;
case 1:
//Top-left
mapping = vec2(0.0f, 1.0f);
offset = vec2(-size, size);
break;
case 2:
//Bottom-right
mapping = vec2(1.0, 0.0);
offset = vec2(size, -size);
break;
case 3:
//Top-right
mapping = vec2(1.0, 1.0);
offset = vec2(size, size);
break;
}
gl_Position = vec4(offset, 0.0f, 1.0f);
}
The mapping variable tells the fragment shader what the texture coordinates are.
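On the C++ side nothing but the empty VAO and the draw call is needed (a minimal sketch; program and texture stand for the linked shader above and whatever texture you want to show):
// Draw the quad with no vertex buffers at all.
GLuint emptyVAO = 0;
glGenVertexArrays(1, &emptyVAO); // a bound VAO is still required in the core profile

glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);

glBindVertexArray(emptyVAO);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // gl_VertexID 0..3 picks the corners in the shader
glBindVertexArray(0);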

Isn't there an easier way that doesn't require an EBO array?
Duplicate your vertices & use glDrawArrays().
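In other words (a sketch with made-up coordinates), the two triangles simply repeat the shared corners so that no index buffer is needed:
// 6 vertices (x, y) for one quad; the shared corners A and C appear twice.
const float quad[] = {
    -0.5f, -0.5f,   // A
    -0.5f,  0.5f,   // B
     0.5f,  0.5f,   // C   (triangle A, B, C)
    -0.5f, -0.5f,   // A
     0.5f,  0.5f,   // C
     0.5f, -0.5f,   // D   (triangle A, C, D)
};
// ... upload to a VBO as usual, then:
glDrawArrays(GL_TRIANGLES, 0, 6);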

You can use glDrawArrays() to draw the vertices directly, without an index buffer.
Something like this (the vbo/vao/shader/tex objects here are a thin wrapper around the corresponding GL objects):
Vertex2D* vertex = (Vertex2D*) vbo->lock();
vertex[0].x = x[0]; vertex[0].y = y[0]; vertex[0].u = u[0]; vertex[0].v = v[0]; vertex[0].color = color;
vertex[1].x = x[0]; vertex[1].y = y[1]; vertex[1].u = u[0]; vertex[1].v = v[1]; vertex[1].color = color;
vertex[2].x = x[1]; vertex[2].y = y[1]; vertex[2].u = u[1]; vertex[2].v = v[1]; vertex[2].color = color;
vertex[3].x = x[1]; vertex[3].y = y[0]; vertex[3].u = u[1]; vertex[3].v = v[0]; vertex[3].color = color;
vbo->unlock();
shader->bind();
vbo->bind();
vao->bind();
tex->bind();
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
tex->unbind();
vao->unbind();
vbo->unbind();
shader->unbind();
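Without the wrapper, the lock()/unlock() pair above corresponds to mapping the buffer (a rough sketch, assuming Vertex2D matches the attribute layout stored in the VAO):
// Raw-GL equivalent of the wrapper calls above.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
Vertex2D* vertex = static_cast<Vertex2D*>(glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY));
// ... fill vertex[0..3] exactly as shown above ...
glUnmapBuffer(GL_ARRAY_BUFFER);

glUseProgram(shaderProgram);
glBindVertexArray(vao);
glBindTexture(GL_TEXTURE_2D, tex);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);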

Related

How to draw a rotated texture including text on top of output texture buffer using OpenGL

I have developed an OpenGL application where we draw strings of text using FreeType and OpenGL.
I want to achieve rotation capability for the text that I put on the OpenGL window.
For instance, the string "This is a text" should be rendered into a buffer on a plain background and then transformed with a rotation value, so that the text appears as shown below.
I also have a text background that is just a regular texture backed by a buffer. I manually fill this background with a uint8_t buffer, which can contain anything from a single colour to an image.
struct Background{
Color color;
Texture* bg_texture;
int x, y;
int w, h;
uint8_t* buffer = nullptr; // filled by create_bg_buffer()
explicit Background(int x, int y):x(x), y(y)
{
}
void create_bg_buffer();
~Background()
{
free(buffer);
}
};
void Background::create_bg_buffer()
{
int w = this->w;
int h = this->h;
if (posix_memalign((void**)&this->buffer, 128, w * h * 4) != 0)
{
VI_ERROR("ERROR::FREETYPE: Couldn't allocate frame buffer");
return; // allocation failed; don't write into an invalid buffer
}
int c = 0;
for ( int i = 0; i < w; i++ )
{
for ( int j = 0; j < h; j++ )
{
this->buffer[ c + 0 ] = this->color.get_color_char(Utils::RED);
this->buffer[ c + 1 ] = this->color.get_color_char(Utils::GREEN);
this->buffer[ c + 2 ] = this->color.get_color_char(Utils::BLUE);
this->buffer[ c + 3 ] = 0xFF;
c += 4;
}
}
}
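For context, a CPU-side RGBA buffer like this normally reaches the GPU through glTexImage2D; a sketch of what Texture::render presumably does internally (names here are illustrative):
// Upload the background buffer to a 2D texture (assumes bg.w, bg.h and bg.buffer are valid).
GLuint texId = 0;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // tightly packed RGBA rows
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bg.w, bg.h, 0, GL_RGBA, GL_UNSIGNED_BYTE, bg.buffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);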
I want users to be able to rotate this text together with its background by a given angle. In and of itself, rotating this is a tedious task, so I want to draw the text inside the background's buffer itself and then rotate that.
Please note that, for various reasons, the way I rotate a background does not use an OpenGL transform; instead I take the rectangle's middle point, rotate each corner manually, and pass those points to OpenGL with this code:
...
GLfloat vertices[32] = {
// positions // colors // texture coords
pos.TR_x, pos.TR_y, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top right
pos.BR_x, pos.BR_y, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, // bottom right
pos.BL_x, pos.BL_y, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom left
pos.TL_x, pos.TL_y, 1.0f, 0.1f, 0.1f, 0.1f, 0.0f, 0.0f // top left
};
unsigned int indices[] = {
0, 1, 3, // first triangle
1, 2, 3 // second triangle
};
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glGenBuffers(1, &EBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
...
Each pos field holds a rotated position, with labels indicating the corner; for example, TR stands for top-right.
We want to use a framebuffer as the output buffer, and then use that framebuffer for the actual OpenGL output.
How should we alter the render_text function so that it uses the framebuffer to prepare the string from each individual character?
void Text::render_text(float angle_rad, bool has_bg)
{
if(has_bg) background->bg_texture->render(background->w, background->h, background->buffer, 1);
int start_y = ty + background->h;
start_y = ( std::abs(start_y - SCR_HEIGHT) / 2);
int total_h_index = 0;
for(auto& line: lines)
{
line.y = start_y;
line.x = tx;
total_h_index += line.total_height + LINE_GAP;
calc_pos(line.x, line.y, line.total_width, line.total_height, total_h_index);
for (c = line.text.begin(); c != line.text.end(); c++)
{
Character ch = Characters[*c];
line.char_h.push_back(ch.Size.y);
line.chars_y.push_back( line.y - (ch.Size.y - ch.Bearing.y) );
}
}
// glEnable(GL_CULL_FACE);
// glDisable(GL_BLEND);
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
shader.use();
glUniform3f(glGetUniformLocation(shader.ID, "textColor"), color.r, color.g, color.b);
glActiveTexture(GL_TEXTURE0);
glBindVertexArray(VAO);
GLfloat vertices[6][4] = {
{ 0.0, 1.0, 0.0, 0.0 },
{ 0.0, 0.0, 0.0, 1.0 },
{ 1.0, 0.0, 1.0, 1.0 },
{ 0.0, 1.0, 0.0, 0.0 },
{ 1.0, 0.0, 1.0, 1.0 },
{ 1.0, 1.0, 1.0, 0.0 }
};
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices); // Be sure to use glBufferSubData and not glBufferData
glBindBuffer(GL_ARRAY_BUFFER, 0);
GLint transition_loc = glGetUniformLocation(shader.ID, "transparency");
glUniform1f(transition_loc, 1.0f);
for(auto& line: lines)
{
GLfloat char_x = 0.0f;
std::string str = line.text;
glm::mat4 transOriginM = glm::translate(glm::mat4(1.0f), glm::vec3(line.x, line.y, 0));
glm::mat4 rotateM = glm::rotate(glm::mat4(1.0f), glm::radians(-angle_rad), glm::vec3(0.0f, 0.0f, 1.0f));
int e = 0;
std::vector<glm::vec2> rotated_pos = calc_rotation(line.chars_x, line.chars_y, -angle_rad, line.total_width);
for (c = str.begin(); c != str.end(); c++)
{
Character ch = Characters[*c];
GLfloat w = ch.Size.x;
GLfloat h = ch.Size.y;
GLfloat xrel = rotated_pos[e].x ; // char_x
GLfloat yrel = rotated_pos[e].y;
// Now advance cursors for next glyph (note that advance is number of 1/64 pixels)
e++; // Bitshift by 6 to get value in pixels (2^6 = 64 (divide amount of 1/64th pixels by 64 to get amount of pixels))
glm::mat4 transRelM = glm::translate(glm::mat4(1.0f), glm::vec3(xrel, yrel, 0));
glm::mat4 scaleM = glm::scale(glm::mat4(1.0f), glm::vec3(w, h, 1.0f));
// Keep the translation matrix that sets the position of the text before the rotation matrix
glm::mat4 modelM = transOriginM * transRelM * rotateM * scaleM;
GLint model_loc = glGetUniformLocation(shader.ID, "model");
glUniformMatrix4fv(model_loc, 1, GL_FALSE, glm::value_ptr(modelM));
// Render glyph texture over quad
glBindTexture(GL_TEXTURE_2D, ch.TextureID);
// Render quad
glDrawArrays(GL_TRIANGLES, 0, 6);
}
}
}
As of now, "adding a character or text" is completely independent of the background operation.
They are just positioned so that it looks like the text has a background.
Our aim is to use a single output buffer that holds both the background colour and the FreeType text data.
The following is how we handle the texture and the texture rotation mechanism:
#define _VERTICIZE_X(number, global) _VERTICIZE(number, global) - 1
#define _VERTICIZE_Y(number, global) _VERTICIZE(number, global) + 1
namespace OpenGL
{
Texture::Texture(int x, int y, int w, int h, int gw, int gh, float angle)
{
Utils::Point rotatedPoints[4] = {
{x, y},
{x + w, y},
{x, y + h},
{x + w, y + h},
};
Utils::RotateRectangle(rotatedPoints, angle);
pos.TL_x = _VERTICIZE_X(rotatedPoints[0].x, gw); pos.TL_y = -_VERTICIZE_Y(rotatedPoints[0].y, gh);
pos.TR_x = _VERTICIZE_X(rotatedPoints[1].x, gw); pos.TR_y = -_VERTICIZE_Y(rotatedPoints[1].y, gh);
pos.BL_x = _VERTICIZE_X(rotatedPoints[2].x, gw); pos.BL_y = -_VERTICIZE_Y(rotatedPoints[2].y, gh);
pos.BR_x = _VERTICIZE_X(rotatedPoints[3].x, gw); pos.BR_y = -_VERTICIZE_Y(rotatedPoints[3].y, gh);
}
int Texture::init(float alpha, std::string* filter, Utils::Color proj_filt)
{
shader = Shader("./src/opengl/shaders/texture_shaders/texture.vs", "./src/opengl/shaders/texture_shaders/texture.fs");
void RotateRectangle(Point (&points)[4], float angle) {
// Calculate the center point
Point center = { 0 };
for (int i = 0; i < 4; i++) {
center.x += points[i].x;
center.y += points[i].y;
}
center.x /= 4;
center.y /= 4;
// Rotate each point
float angleRadians = angle * M_PI / 180.0f;
float s = sin(angleRadians);
float c = cos(angleRadians);
for (int i = 0; i < 4; i++) {
// Subtract the center point to get a vector from the center to the point
Point vector = { points[i].x - center.x, points[i].y - center.y };
// Rotate the vector
float x = vector.x;
float y = vector.y;
vector.x = x * c - y * s;
vector.y = x * s + y * c;
// Add the center point back to the rotated vector to get the new point
points[i].x = vector.x + center.x;
points[i].y = vector.y + center.y;
}
}
How can we use a framebuffer so that all OpenGL and FreeType operations are executed in a single output space, so that afterwards we can rotate the whole text using this single output framebuffer?
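For reference, a minimal render-to-texture setup (a sketch, not taken from the code above; areaW/areaH and SCR_WIDTH/SCR_HEIGHT are illustrative names) would be to draw the background and all glyph quads while an FBO is bound, and then draw the FBO's colour texture as a single rotated quad:
// Create an FBO with a colour texture attachment sized to the text area.
GLuint fbo = 0, fboTex = 0;
glGenTextures(1, &fboTex);
glBindTexture(GL_TEXTURE_2D, fboTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, areaW, areaH, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { /* handle error */ }

// Pass 1: render the background quad and the glyph quads into the FBO, with no rotation.
glViewport(0, 0, areaW, areaH);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
// ... draw background + text here ...

// Pass 2: back to the default framebuffer, draw fboTex on one rotated quad.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT);
// ... draw a quad textured with fboTex, using the existing rotation code ...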

How can I draw a circle in OpenGL Core 3.3 with orthographic projection?

I'm a complete beginner to OpenGL programming and am trying to follow the Breakout tutorial at learnopengl.com but would like to draw the ball as an actual circle, instead of using a textured quad like Joey suggests. However, every result that Google throws back at me for "draw circle opengl 3.3" or similar phrases seems to be at least a few years old, and using even-older-than-that versions of the API :-(
The closest thing that I've found is this SO question, but of course the OP just had to use a custom VertexFormat object to abstract some of the details, without sharing his/her implementation of such! Just my luck! :P
There's also this YouTube tutorial that uses a seemingly-older version of the API, but copying the code verbatim (except for the last few lines which is where the code looks old) still got me nowhere.
My version of SpriteRenderer::initRenderData() from the tutorial:
void SpriteRenderer::initRenderData() {
GLuint vbo;
auto attribSize = 0;
GLfloat* vertices = nullptr;
// Determine whether this sprite is a circle or
// quad and setup the vertices array accordingly
if (!this->isCircle) {
attribSize = 4;
vertices = new GLfloat[24] {...} // works for rendering quads
} else {
// This code is adapted from the YouTube tutorial that I linked
// above and is where things go pear-shaped for me...or at least
// **not** circle-shaped :P
attribSize = 3;
GLfloat x = 0.0f;
GLfloat y = 0.0f;
GLfloat z = 0.0f;
GLfloat r = 100.0f;
GLint numSides = 6;
GLint numVertices = numSides + 2;
GLfloat* xCoords = new GLfloat[numVertices];
GLfloat* yCoords = new GLfloat[numVertices];
GLfloat* zCoords = new GLfloat[numVertices];
xCoords[0] = x;
yCoords[0] = y;
zCoords[0] = z;
for (auto i = 1; i < numVertices; i++) {
xCoords[i] = x + (r * cos(i * (M_PI * 2.0f) / numSides));
yCoords[i] = y + (r * sin(i * (M_PI * 2.0f) / numSides));
zCoords[i] = z;
}
vertices = new GLfloat[numVertices * 3];
for (auto i = 0; i < numVertices; i++) {
vertices[i * 3] = xCoords[i];
vertices[i * 3 + 1] = yCoords[i];
vertices[i * 3 + 2] = zCoords[i];
}
}
// This is where I go back to the learnopengl.com code. Once
// again, the following works for quads but not circles!
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 24 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, attribSize, GL_FLOAT, GL_FALSE,
attribSize * sizeof(GLfloat), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
And here's the SpriteRenderer::DrawSprite() method (the only difference from the original being lines 24 - 28):
void SpriteRenderer::Draw(vec2 position, vec2 size, GLfloat rotation, vec3 colour) {
// Prepare transformations
shader.Use();
auto model = mat4(1.0f);
model = translate(model, vec3(position, 0.0f));
model = translate(model, vec3(0.5f * size.x, 0.5f * size.y, 0.0f)); // Move origin of rotation to center
model = rotate(model, rotation, vec3(0.0f, 0.0f, 1.0f)); // Rotate quad
model = translate(model, vec3(-0.5f * size.x, -0.5f * size.y, 0.0f)); // Move origin back
model = scale(model, vec3(size, 1.0f)); // Lastly, scale
shader.SetMatrix4("model", model);
// Render textured quad
shader.SetVector3f("spriteColour", colour);
glActiveTexture(GL_TEXTURE0);
texture.Bind();
glBindVertexArray(vao);
if (!isCircular) {
glDrawArrays(GL_TRIANGLES, 0, 6);
} else {
glDrawArrays(GL_TRIANGLE_FAN, 0, 24); // also tried "12" and "8" for the last param, to no avail
}
glBindVertexArray(0);
}
And finally, the shaders (different to the ones used for quads):
// Vertex shader
#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 model;
uniform mat4 projection;
void main() {
gl_Position = projection * model *
vec4(position.xyz, 1.0f);
}
// Fragment shader
#version 330 core
out vec4 colour;
uniform vec3 spriteColour;
void main() {
colour = vec4(spriteColour, 1.0);
}
P.S. I know I could just use a quad but I'm trying to learn how to draw all primitives in OpenGL, not just quads and triangles (thanks anyway Joey)!
P.P.S. I just realised that the learnopengl.com site has a whole section devoted to debugging OpenGL apps, so I set that up, but to no avail :-( I don't think the error handling is supported by my driver (Intel UHD Graphics 620, latest driver), since GL_CONTEXT_FLAG_DEBUG_BIT was not set after following the instructions:
Requesting a debug context in GLFW is surprisingly easy as all we have to do is pass a hint to GLFW that we'd like to have a debug output context. We have to do this before we call glfwCreateWindow:
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE);
Once we initialize GLFW we should have a debug context if we're using OpenGL version 4.3 or higher, or else we have to take our chances and hope the system is still able to request a debug context. Otherwise we have to request debug output using its OpenGL extension(s).
To check if we successfully initialized a debug context we can query OpenGL:
GLint flags; glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
if (flags & GL_CONTEXT_FLAG_DEBUG_BIT) {
// initialize debug output
}
That if statement is never entered into!
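For completeness, the "initialize debug output" step inside that if block would look roughly like this (a sketch, assuming the context really does support GL 4.3 / KHR_debug):
// Debug callback: just print every message the driver gives us.
void GLAPIENTRY DebugCallback(GLenum source, GLenum type, GLuint id, GLenum severity,
                              GLsizei length, const GLchar* message, const void* userParam) {
    printf("GL debug: %s\n", message);
}

// Inside the if (flags & GL_CONTEXT_FLAG_DEBUG_BIT) block:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);          // report errors on the call that caused them
glDebugMessageCallback(DebugCallback, nullptr);
glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, nullptr, GL_TRUE);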
Thanks to #Mykola's answer to this question I have gotten half-way there:
numVertices = 43;
vertices = new GLfloat[numVertices];
auto i = 2;
auto x = 0.0f,
y = x,
z = x,
r = 0.3f;
auto numSides = 21;
auto TWO_PI = 2.0f * M_PI;
auto increment = TWO_PI / numSides;
for (auto angle = 0.0f; angle <= TWO_PI; angle += increment) {
vertices[i++] = r * cos(angle) + x;
vertices[i++] = r * sin(angle) + y;
}
Which gives me .
Two questions I still have:
Why is there an extra line going from the centre to the right side and how can I fix it?
According to #user1118321's comment on a related SO answer, I should be able to prepend another vertex to the array at (0, 0) and use GL_TRIANGLE_FAN instead of GL_LINE_LOOP to get a coloured circle. But this results in no output for me :-( Why?
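For comparison, a sketch of building a filled circle as a triangle fan: the centre vertex comes first, the first rim vertex is repeated at the end to close the fan, and the count passed to glDrawArrays is the number of vertices, not the number of floats.
// Sketch: unit-circle fan with numSides segments (requires <vector> and <cmath>).
const int numSides = 32;
const int numVertices = numSides + 2;        // centre + rim points + repeated first rim point
std::vector<GLfloat> verts(numVertices * 2); // x, y per vertex
verts[0] = 0.0f;                             // centre
verts[1] = 0.0f;
for (int i = 0; i <= numSides; ++i) {
    float angle = i * 2.0f * float(M_PI) / numSides;
    verts[2 * (i + 1) + 0] = 0.5f * cosf(angle);
    verts[2 * (i + 1) + 1] = 0.5f * sinf(angle);
}
// Upload verts.size() * sizeof(GLfloat) bytes, set a 2-component attribute pointer, then:
glDrawArrays(GL_TRIANGLE_FAN, 0, numVertices);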

OpenGL ES Drawing Shapes

I'm learning OpenGL ES to render for native Android development. I'm able to get a triangle to draw, but I cannot get more than one triangle to draw. I just want to draw a rectangle, but if I tell the glVertexPointer function more than 3 vertices then it does not draw anything. I tried using GL_TRIANGLES and GL_TRIANGLE_STRIP.
What am I doing wrong?
struct Quad
{
GLfloat Vertices[18];
const GLbyte nNumVerts;
Quad(GLfloat i_fWidth, GLfloat i_fHeight) : nNumVerts(6)
{
GLfloat wide = i_fWidth / 2;
GLfloat high = i_fHeight / 2;
Vertices[0] = -wide; Vertices[1] = high; Vertices[2] = 0.0f;
Vertices[3] = -wide; Vertices[4] = -high; Vertices[5] = 0.0f;
Vertices[6] = wide; Vertices[7] = -high; Vertices[8] = 0.0f;
Vertices[9] = -wide; Vertices[10] = high; Vertices[11] = 0.0f;
Vertices[12] = wide; Vertices[13] = -high; Vertices[14] = 0.0f;
Vertices[15] = wide; Vertices[16] = high; Vertices[17] = 0.0f;
}
};
void Renderer::Render()
{
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
// If I change "m_Quad.nNumVerts" to '3' instead of '6' it will draw a triangle
// Anything higher than '3' and it doesn't draw anything
glVertexPointer(m_Quad.nNumVerts, GL_FLOAT, 0, m_Quad.Vertices);
glDrawArrays(GL_TRIANGLES, 0, m_Quad.nNumVerts);
eglSwapBuffers(m_pEngine->pDisplay, m_pEngine->pSurface);
}
The first argument of glVertexPointer is the number of coordinates per vertex, not the number of vertices; it should stay 3 in this case.
glVertexPointer(3, GL_FLOAT, 0, m_Quad.Vertices);
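With that fixed, the render call would look something like this (same code as above, only the first argument of glVertexPointer changed):
void Renderer::Render()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    // 3 coordinates (x, y, z) per vertex...
    glVertexPointer(3, GL_FLOAT, 0, m_Quad.Vertices);
    // ...and nNumVerts (6) vertices drawn in total.
    glDrawArrays(GL_TRIANGLES, 0, m_Quad.nNumVerts);
    eglSwapBuffers(m_pEngine->pDisplay, m_pEngine->pSurface);
}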

HLSL Particle system will not display

I have been trying to add a particle system to my DirectX 11 graphics demo, and so I have been using the book 'Introduction to 3D Game Programming with DirectX 11'.
Because of this I am attempting to use an HLSL stream-out technique to update the particle system and a separate technique to render the particles.
Below is the HLSL code for the particle system. I have tried adjusting the acceleration and velocity on the off chance the speed was sending the particles off the screen, but this had no effect.
cbuffer cbPerFrame
{
float3 gEyePosW;
float3 gEmitPosW;
float3 gEmitDirW;
float gGameTime;
float gTimeStep;
float4x4 gViewProj;
};
cbuffer cbFixed
{
// Net constant acceleration used to accelerate the particles.
float3 gAccelW = {0.0f, 0.2f, 0.0f};
// Texture coordinates for billboarding are always the same - we use a quad in this effect :)
float2 gTexC[4] =
{
float2(0.0f, 1.0f),
float2(0.0f, 0.0f),
float2(1.0f, 1.0f),
float2(1.0f, 0.0f)
};
};
// Nonnumeric values cannot be added to a cbuffer.
Texture2DArray gTextureMapArray;
// Random texture used to generate random numbers in shaders.
Texture1D gRandomTexture;
SamplerState samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = WRAP;
AddressV = WRAP;
};
DepthStencilState DisableDepth
{
DepthEnable = FALSE;
DepthWriteMask = ZERO;
};
DepthStencilState NoDepthWrites
{
DepthEnable = TRUE;
DepthWriteMask = ZERO;
};
BlendState AdditiveBlending
{
AlphaToCoverageEnable = FALSE;
BlendEnable[0] = TRUE;
SrcBlend = SRC_ALPHA;
DestBlend = ONE;
BlendOp = ADD;
SrcBlendAlpha = ZERO;
DestBlendAlpha = ZERO;
BlendOpAlpha = ADD;
RenderTargetWriteMask[0] = 0x0F;
};
///////////////////////////////////////////////////////////////
// Helper functions
//
///////////////////////////////////////////////////////////////
float3 RandUnitVec3(float offset)
{
// Use game time plus offset to sample random texture.
float u = (gGameTime + offset);
// coordinates in [-1,1]
float3 v = gRandomTexture.SampleLevel(samLinear, u, 0).xyz;
// project onto unit sphere (Normalize)
return normalize(v);
}
///////////////////////////////////////////////////////////////
// Stream Out Technique
//
///////////////////////////////////////////////////////////////
#define PT_EMITTER 0
#define PT_FLARE 1
struct Particle
{
float3 InitPosW : POSITION;
float3 InitVelW : VELOCITY;
float2 SizeW : SIZE;
float Age : AGE;
uint Type : TYPE;
};
Particle StreamOutVS(Particle vin)
{
return vin;
}
// The stream-out GS is just responsible for emitting
// new particles and destroying old particles. The logic
// programed here will generally vary from particle system
// to particle system, as the destroy/spawn rules will be
// different.
[maxvertexcount(2)]
void StreamOutGS(point Particle gin[1],
inout PointStream<Particle> ptStream)
{
gin[0].Age += gTimeStep;
// if particle is emitter particle
if (gin[0].Type == PT_EMITTER)
{
// If it's time to emit new particle
if (gin[0].Age > 0.005f)
{
float3 vRandom = RandUnitVec3(0.0f);
vRandom.x *= 0.5f;
vRandom.z *= 0.5f;
Particle p;
p.InitPosW = gEmitPosW.xyz;
p.InitVelW = 0.5f*vRandom;
p.SizeW = float2(3.0f, 3.0f);
p.Age = 0.0f;
p.Type = PT_FLARE;
ptStream.Append(p);
// reset the time to emit
gin[0].Age = 0.0f;
}
// always keep emitters
ptStream.Append(gin[0]);
}
else
{
// Set conditions to keep a particle - in this case age limit
if (gin[0].Age <= 1.0f)
{
ptStream.Append(gin[0]);
}
}
}
GeometryShader gsStreamOut = ConstructGSWithSO(
CompileShader( gs_5_0, StreamOutGS() ),
"POSITION.xyz; VELOCITY.xyz; SIZE.xyz; AGE.x; TYPE.x" );
technique11 StreamOutTech
{
pass P0
{
SetVertexShader( CompileShader( vs_5_0, StreamOutVS() ) );
SetGeometryShader( gsStreamOut );
// disable pixel shader for stream-out only
SetPixelShader(NULL);
// we must also disable the depth buffer for stream-out only
SetDepthStencilState( DisableDepth, 0 );
}
}
///////////////////////////////////////////////////////////////
// Draw Technique
//
///////////////////////////////////////////////////////////////
struct VertexIn
{
float3 Pos : POSITION;
float2 SizeW : SIZE;
};
struct VertexOut
{
float3 PosW : POSITION;
float2 SizeW : SIZE;
float4 Colour : COLOR;
uint Type : TYPE;
};
VertexOut DrawVS(Particle vin)
{
VertexOut vout;
float t = vin.Age;
// constant Acceleration equation
vout.PosW = 0.5f*t*t*gAccelW + t*vin.InitVelW + vin.InitPosW;
// fade colour with time
float opacity = 1.0f - smoothstep(0.0f, 1.0f, t/1.0f);
vout.Colour = float4(1.0f, 1.0f, 1.0f, opacity);
vout.SizeW = vin.SizeW;
vout.Type = vin.Type;
return vout;
}
struct GeoOut
{
float4 PosH : SV_POSITION;
float4 Colour : COLOR;
float2 Tex : TEXCOORD;
};
// Expand each 'Point' into a quad (4 verticies)
[maxvertexcount(4)]
void DrawGS(point VertexOut gin[1],
inout TriangleStream<GeoOut> triStream)
{
// Do not draw Emiter particles in this system
if (gin[0].Type != PT_EMITTER)
{
//
// Compute world matrix so that billboard faces the camera.
//
float3 look = normalize(gEyePosW.xyz - gin[0].PosW);
float3 right = normalize(cross(float3(0,1,0), look));
float3 up = cross(look, right);
//
// Compute triangle strip vertices (quad) in world space.
//
float halfWidth = 0.5f*gin[0].SizeW.x;
float halfHeight = 0.5f*gin[0].SizeW.y;
float4 v[4];
v[0] = float4(gin[0].PosW + halfWidth*right - halfHeight*up, 1.0f);
v[1] = float4(gin[0].PosW + halfWidth*right + halfHeight*up, 1.0f);
v[2] = float4(gin[0].PosW - halfWidth*right - halfHeight*up, 1.0f);
v[3] = float4(gin[0].PosW - halfWidth*right + halfHeight*up, 1.0f);
//
// Transform quad vertices to world space and output
// them as a triangle strip.
//
GeoOut gout;
[unroll]
for(int i = 0; i < 4; ++i)
{
gout.PosH = mul(v[i], gViewProj);
gout.Tex = gTexC[i];
gout.Colour = gin[0].Colour;
triStream.Append(gout);
}
}
}
float DrawPS(GeoOut pin) : SV_TARGET
{
return gTextureMapArray.Sample(samLinear, float3(pin.Tex, 0)) * pin.Colour;
}
technique11 DrawTech
{
pass P0
{
SetVertexShader( CompileShader( vs_5_0, DrawVS() ) );
SetGeometryShader( CompileShader( gs_5_0, DrawGS() ) );
SetPixelShader( CompileShader( ps_5_0, DrawPS() ) );
SetBlendState(AdditiveBlending, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xffffffff);
SetDepthStencilState( NoDepthWrites, 0 );
}
}
Below is the code for building the VB.
void ParticleSystem::BuildVB(ID3D11Device* device)
{
/////////////////////////////////////////////////////////
// Create the buffer to start the particle system.
/////////////////////////////////////////////////////////
D3D11_BUFFER_DESC vbd;
vbd.Usage = D3D11_USAGE_DEFAULT;
vbd.ByteWidth = sizeof(Vertex::Particle) * 1;
vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vbd.CPUAccessFlags = 0;
vbd.MiscFlags = 0;
vbd.StructureByteStride = 0;
// The initial particle emitter has type 0 and age 0. The rest
// of the particle attributes do not apply to an emitter.
Vertex::Particle p;
ZeroMemory(&p, sizeof(Vertex::Particle));
p.Age = 0.0f;
p.Type = 0;
D3D11_SUBRESOURCE_DATA vinitData;
vinitData.pSysMem = &p;
HR(device->CreateBuffer(&vbd, &vinitData, &mInitVB));
//////////////////////////////////////////////////////////////
// Create the buffers which swap back and forth for stream-out and drawing.
//////////////////////////////////////////////////////////////
vbd.ByteWidth = sizeof(Vertex::Particle) * mMaxParticles;
vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_STREAM_OUTPUT;
HR(device->CreateBuffer(&vbd, 0, &mDrawVB));
HR(device->CreateBuffer(&vbd, 0, &mStreamOutVB));
}
And now the Draw cpp code.
void ParticleSystem::Draw(ID3D11DeviceContext* dc, const XMMATRIX& viewProj)
{
//
// Set constants.
//
mFX->SetViewProj(viewProj);
mFX->SetGameTime(mGameTime);
mFX->SetTimeStep(mTimeStep);
mFX->SetEyePosW(mEyePosW);
mFX->SetEmitPosW(mEmitPosW);
mFX->SetEmitDirW(mEmitDirW);
mFX->SetTexArray(mTextureArraySRV);
mFX->SetRandomTex(mRandomTextureSRV);
//
// Set IA stage.
//
dc->IASetInputLayout(InputLayouts::Particle);
dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
UINT stride = sizeof(Vertex::Particle);
UINT offset = 0;
// On the first pass, use the initialization VB. Otherwise, use
// the VB that contains the current particle list.
if( mFirstRun )
dc->IASetVertexBuffers(0, 1, &mInitVB, &stride, &offset);
else
dc->IASetVertexBuffers(0, 1, &mDrawVB, &stride, &offset);
//
// Draw the current particle list using stream-out only to update them.
// The updated vertices are streamed-out to the target VB.
//
dc->SOSetTargets(1, &mStreamOutVB, &offset);
D3DX11_TECHNIQUE_DESC techDesc;
mFX->StreamOutTech->GetDesc( &techDesc );
for(UINT p = 0; p < techDesc.Passes; ++p)
{
mFX->StreamOutTech->GetPassByIndex( p )->Apply(0, dc);
if(mFirstRun)
{
dc->Draw(1, 0);
mFirstRun = false;
}
else
{
dc->DrawAuto();
}
}
// done streaming-out--unbind the vertex buffer
ID3D11Buffer* bufferArray[1] = {0};
dc->SOSetTargets(1, bufferArray, &offset);
// ping-pong the vertex buffers
std::swap(mDrawVB, mStreamOutVB);
//
// Draw the updated particle system we just streamed-out.
//
dc->IASetVertexBuffers(0, 1, &mDrawVB, &stride, &offset);
mFX->DrawTech->GetDesc( &techDesc );
for(UINT p = 0; p < techDesc.Passes; ++p)
{
mFX->DrawTech->GetPassByIndex( p )->Apply(0, dc);
dc->DrawAuto();
}
}
I had thought that perhaps a blend state or depth state in use by some of the other objects in my scene might be causing the issue (perhaps I have misunderstood something I set earlier). I tried removing all other render code, leaving just the draw code above, but with no results.
To my mind, I can only think of a few possible causes for the issue I am having, but so far I am unable to find a solution:
The scale of the system is wrong for my scene, e.g. the particles are drawing but are moving off the screen too fast to be seen. As mentioned above, I have tried removing the acceleration and velocity of the particles in the HLSL code in order to see stationary particles; this had no effect.
The blend state/depth-stencil state is incorrect, e.g. as mentioned above.
The emitter particle is, for some reason, not being produced/placed correctly, causing no 'drawable' particles to be produced in turn.
As most of this code is in the .fx file, I am unable to step through to check the emitter particles. I think this is the more likely issue, but I have been wrong before. (One way to check this from the C++ side is a stream-output statistics query; see the sketch below.)
Any help on this would be greatly appreciated; I am well and truly stuck on this.
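A debugging sketch (not from the original code; device and dc are the usual D3D11 device and immediate context) for counting how many particles the stream-out pass actually writes, using a D3D11_QUERY_SO_STATISTICS query wrapped around the stream-out draw:
// Created once at init time.
ID3D11Query* soQuery = nullptr;
D3D11_QUERY_DESC qd = {};
qd.Query = D3D11_QUERY_SO_STATISTICS;
qd.MiscFlags = 0;
device->CreateQuery(&qd, &soQuery);

// Around the stream-out pass in ParticleSystem::Draw:
dc->Begin(soQuery);
// ... Apply() the StreamOutTech pass and issue Draw(1, 0) / DrawAuto() as above ...
dc->End(soQuery);

D3D11_QUERY_DATA_SO_STATISTICS stats = {};
while (dc->GetData(soQuery, &stats, sizeof(stats), 0) == S_FALSE) { /* wait for the GPU */ }
// stats.NumPrimitivesWritten is the number of particles that survived the stream-out pass;
// if it stays at 0 or 1, the emitter logic in StreamOutGS is the place to look.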
Below I have added additional snippets of code which may be of use.
E.G. Input layout.
const D3D11_INPUT_ELEMENT_DESC InputLayoutDesc::Particle[5] =
{
{"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"VELOCITY", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"SIZE", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"AGE", 0, DXGI_FORMAT_R32_FLOAT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"TYPE", 0, DXGI_FORMAT_R32_UINT, 0, 36, D3D11_INPUT_PER_VERTEX_DATA, 0},
};
And particle system INIT
mFire.Init(md3dDevice, AppEffects::FireFX, mFlareTextureSRV, mRandomTextureSRV, 500);
mFire.SetEmitPos(XMFLOAT3(1.0f, 0.5f, 0.0f));
void ParticleSystem::Init(ID3D11Device* device, ParticleEffect* fx,
ID3D11ShaderResourceView* textureArraySRV,
ID3D11ShaderResourceView* randomTextureSRV,
UINT maxParticles)
{
mMaxParticles = maxParticles;
mFX = fx;
mTextureArraySRV = textureArraySRV;
mRandomTextureSRV = randomTextureSRV;
BuildVB(device);
}
If i have missed any section of the code you would need to see, let me know.
Thanks in advance.
Sorry about this stupid question, all. It turns out it was a simple mistake in the pixel shader: I had declared the pixel shader's return type as float instead of float4.

Modern equivalent of `gluOrtho2D`

What is the modern equivalent of the OpenGL function gluOrtho2D? Clang is giving me deprecation warnings. I believe I need to write some kind of vertex shader? What should it look like?
I started off this answer thinking "It's not that different, you just have to...".
I started writing some code to prove myself right, and ended up not really doing so. Anyway, here are the fruits of my efforts: a minimal annotated example of "modern" OpenGL.
There's a good bit of code you'll need before modern OpenGL will start to act like old-school OpenGL. I'm not going to get into the reasons why you might like to do it the new way (or not) -- there are countless other answers that give a pretty good rundown. Instead I'll post some minimal code that can get you running if you're so inclined.
You should end up with this stunning piece of art:
Basic Render Process
Part 1: Vertex buffers
void TestDraw(){
// create a vertex buffer (This is a buffer in video memory)
GLuint my_vertex_buffer;
glGenBuffers(1 /*ask for one buffer*/, &my_vertex_buffer);
const float a_2d_triangle[] =
{
200.0f, 10.0f,
10.0f, 200.0f,
400.0f, 200.0f
};
// GL_ARRAY_BUFFER indicates we're using this for
// vertex data (as opposed to things like feedback, index, or texture data)
// so this call says use my_vertex_buffer as the vertex data source
// this will become relevant as we make draw calls later
glBindBuffer(GL_ARRAY_BUFFER, my_vertex_buffer);
// allocate some space for our buffer
glBufferData(GL_ARRAY_BUFFER, 4096, NULL, GL_DYNAMIC_DRAW);
// we've been a bit optimistic, asking for 4k of space even
// though there is only one triangle.
// the NULL source indicates that we don't have any data
// to fill the buffer quite yet.
// GL_DYNAMIC_DRAW indicates that we intend to change the buffer
// data from frame-to-frame.
// the idea is that we can place more than 3(!) vertices in the
// buffer later as part of normal drawing activity
// now we actually put the vertices into the buffer.
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(a_2d_triangle), a_2d_triangle);
Part 2: Vertex Array Object:
We need to define how the data contained in my_vertex_buffer is structured. This state is contained in a vertex array object (VAO). In modern OpenGL there needs to be at least one of these.
GLuint my_vao;
glGenVertexArrays(1, &my_vao);
//lets use the VAO we created
glBindVertexArray(my_vao);
// now we need to tell the VAO how the vertices in my_vertex_buffer
// are structured
// our vertices are really simple: each one has 2 floats of position data
// they could have been more complicated (texture coordinates, color --
// whatever you want)
// enable the first attribute in our VAO
glEnableVertexAttribArray(0);
// describe what the data for this attribute is like
glVertexAttribPointer(0, // the index we just enabled
2, // the number of components (our two position floats)
GL_FLOAT, // the type of each component
false, // should the GL normalize this for us?
2 * sizeof(float), // number of bytes until the next component like this
(void*)0); // the offset into our vertex buffer where this element starts
Part 3: Shaders
OK, we have our source data all set up, now we can set up the shader which will transform it into pixels
// first create some ids
GLuint my_shader_program = glCreateProgram();
GLuint my_fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
GLuint my_vertex_shader = glCreateShader(GL_VERTEX_SHADER);
// we'll need to compile the vertex shader and fragment shader
// and then link them into a full "shader program"
// load one string from &my_fragment_source
// the NULL indicates that the string is null-terminated
const char* my_fragment_source = FragmentSourceFromSomewhere();
glShaderSource(my_fragment_shader, 1, &my_fragment_source, NULL);
// now compile it:
glCompileShader(my_fragment_shader);
// then check the result
GLint compiled_ok;
glGetShaderiv(my_fragment_shader, GL_COMPILE_STATUS, &compiled_ok);
if (!compiled_ok){ printf("Oh Noes, fragment shader didn't compile!\n"); }
else{
glAttachShader(my_shader_program, my_fragment_shader);
}
// and again for the vertex shader
const char* my_vertex_source = VertexSourceFromSomewhere();
glShaderSource(my_vertex_shader, 1, &my_vertex_source, NULL);
glCompileShader(my_vertex_shader);
glGetShaderiv(my_vertex_shader, GL_COMPILE_STATUS, &compiled_ok);
if (!compiled_ok){ printf("Oh Noes, vertex shader didn't compile!\n"); }
else{
glAttachShader(my_shader_program, my_vertex_shader);
}
//finally, link the program, and set it active
glLinkProgram(my_shader_program);
glUseProgram(my_shader_program);
Part 4: Drawing things on the screen
//get the screen size
float my_viewport[4];
glGetFloatv(GL_VIEWPORT, my_viewport);
//now create a projection matrix
float my_proj_matrix[16];
MyOrtho2D(my_proj_matrix, 0.0f, my_viewport[2], my_viewport[3], 0.0f);
//"uProjectionMatrix" refers directly to the variable of that name in
// shader source
GLuint my_projection_ref =
glGetUniformLocation(my_shader_program, "uProjectionMatrix");
// send our projection matrix to the shader
glUniformMatrix4fv(my_projection_ref, 1, GL_FALSE, my_proj_matrix );
//clear the background
glClearColor(0.3, 0.4, 0.4, 1.0);
glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
// *now* after that tiny setup, we're ready to draw the best 24 bytes of
// vertex data ever.
// draw the 3 vertices starting at index 0, interpreting them as triangles
glDrawArrays(GL_TRIANGLES, 0, 3);
// now just swap buffers however your window manager lets you
}
And That's it!
... except for the actual
Shaders
I started to get a little tired at this point, so the comments are a bit lacking. Let me know if you'd like anything clarified.
const char* VertexSourceFromSomewhere()
{
return
"#version 330\n"
"layout(location = 0) in vec2 inCoord;\n"
"uniform mat4 uProjectionMatrix;\n"
"void main()\n"
"{\n"
" gl_Position = uProjectionMatrix*(vec4(inCoord, 0, 1.0));\n"
"}\n";
}
const char* FragmentSourceFromSomewhere()
{
return
"#version 330 \n"
"out vec4 outFragColor;\n"
"vec4 DebugMagenta(){ return vec4(1.0, 0.0, 1.0, 1.0); }\n"
"void main() \n"
"{\n"
" outFragColor = DebugMagenta();\n"
"}\n";
}
The Actual Question you asked: Orthographic Projection
As noted, the actual math is just directly from Wikipedia.
void MyOrtho2D(float* mat, float left, float right, float bottom, float top)
{
// this is basically from
// http://en.wikipedia.org/wiki/Orthographic_projection_(geometry)
const float zNear = -1.0f;
const float zFar = 1.0f;
const float inv_z = 1.0f / (zFar - zNear);
const float inv_y = 1.0f / (top - bottom);
const float inv_x = 1.0f / (right - left);
//first column
*mat++ = (2.0f*inv_x);
*mat++ = (0.0f);
*mat++ = (0.0f);
*mat++ = (0.0f);
//second
*mat++ = (0.0f);
*mat++ = (2.0*inv_y);
*mat++ = (0.0f);
*mat++ = (0.0f);
//third
*mat++ = (0.0f);
*mat++ = (0.0f);
*mat++ = (-2.0f*inv_z);
*mat++ = (0.0f);
//fourth
*mat++ = (-(right + left)*inv_x);
*mat++ = (-(top + bottom)*inv_y);
*mat++ = (-(zFar + zNear)*inv_z);
*mat++ = (1.0f);
}
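As an aside (not part of the original answer), if you already use GLM you can skip MyOrtho2D and get the same matrix from glm::ortho:
// Same projection via GLM (needs <glm/glm.hpp>, <glm/gtc/matrix_transform.hpp>, <glm/gtc/type_ptr.hpp>).
// The 4-argument overload uses the same -1..1 depth range as the code above.
glm::mat4 proj = glm::ortho(0.0f, my_viewport[2], my_viewport[3], 0.0f);
glUniformMatrix4fv(my_projection_ref, 1, GL_FALSE, glm::value_ptr(proj));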
Modern OpenGL is significantly different. You won't be able to just drop in a new function. Read up...
http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-1:-The-Graphics-Pipeline.html
http://www.arcsynthesis.org/gltut/index.html
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/