Depth testing is not working at all - c++

For some reason, depth testing is not working in my project. I have made sure it is enabled, but it still doesn't work: I can see certain faces being drawn over each other, and different objects (cubes) in the scene are drawn over one another.
I am using the default framebuffer, so there should be a depth buffer. I also checked gl_FragCoord.z and it returned the correct depth. I've gone through my code thoroughly for ages and searched dozens of Google pages, and I still can't find the answer.
Here is the code presented in order of execution that is relevant to this question:
Init()
void Program::Init()
{
std::cout << "Initialising" << std::endl;
glViewport(0, 0, Options::width, Options::height);
glewExperimental = true; // Needed in core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return;
}
window.setKeyRepeatEnabled(false);
sf::Mouse::setPosition(sf::Vector2i(Options::width / 2, Options::height / 2), window);
LoadGameState(GameState::INGAME, false);
Run();
}
GLInit()
void Program::GLInit() {
lampShader = Shader("StandardVertex.shader", "LightingFragmentShader.shader");
ourShader = Shader("VertexShader.shader", "SimpleFragmentShader.shader");
screenShader = Shader("FrameVertexShader.shader", "FrameFragmentShader.shader");
skyboxShader = Shader("SkyboxVertex.shader", "SkyboxFragment.shader");
cubeDepthShader = Shader("CubeDepthVertex.shader", "CubeDepthFragment.shader", &std::string("CubeDepthGeometry.shader"));
debugDepthQuad = Shader("SimpleVertex.shader", "DepthFragment.shader");
blurShader = Shader("SimpleVertex.shader", "BloomFragment.shader");
bloomShader = Shader("SimpleVertex.shader", "FinalBloom.shader");
depthShader = Shader("VertexDepth.shader", "EmptyFragment.shader");
geometryPass = Shader("GeometryVertex.shader", "GeometryFragment.shader");
lightingPass = Shader("SimpleVertex.shader", "LightingFragment.shader");
shaderSSAO = Shader("SimpleVertex.shader", "SSAOFragment.shader");
shaderSSAOblur = Shader("SimpleVertex.shader", "SSAOBlurFragment.shader");
colliders = Shader("VertexShader.shader", "GreenFragment.shader");
glEnable(GL_DEPTH_TEST);
}
and another function that runs on repeat:
RunGame()
void Program::RunGame()
{
scene->mainCamera.Input(Options::deltaTime, window);
if(timeSinceGameStart.getElapsedTime().asSeconds()<2)
DoLights();
if(timeSinceGameStart.getElapsedTime().asSeconds() > 5.0f)
scene->DoPhysics();
scene->CheckCollisions();
glClearColor(0.32f, 0.5f, 0.58f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
projection = glm::mat4();
view = glm::mat4();
// view/projection transformations
projection = glm::perspective(glm::radians(Options::fov), (float)Options::width / (float)Options::height, 0.1f, 100.0f);
view = scene->mainCamera.GetViewMatrix();
lampShader.use();
lampShader.setMat4("projection", projection);
lampShader.setMat4("view", view);
lampShader.setInt("setting", Options::settings);
glm::mat4 model;
for (unsigned int i = 0; i < lights.size(); i++)
{
model = glm::mat4();
model = glm::translate(model, lights[i].position);
model = glm::scale(model, glm::vec3(0.125f));
lampShader.setMat4("model", model);
lampShader.setVec3("lightColor", lights[i].colour);
Scene::renderCube();
}
if (Options::showColliders) {
colliders.use();
colliders.setMat4("projection", projection);
colliders.setMat4("view", view);
scene->RenderColliders(colliders);
}
}
I have looked at lots of pages and followed all the recommendations:
call glEnable(GL_DEPTH_TEST)
clear the depth buffer
make sure zNear is not a tiny number
not call functions that would disable depth like glDepthMask(GL_FALSE)
Any help is welcome. I hope this is enough code; if it's not, I'll provide anything requested. Most of the functions not shown are fairly self-explanatory. Also, if you think all the code above is fine, please tell me in the comments so I know the issue is not there.
I am, of course, using C++ and GLEW. I am also using SFML for my window.
Thanks for any answers
EDIT:
Code responsible for creating sfml window:
Program::Program()
:settings(24, 8, 4, 3, 0),
window(sf::VideoMode(Options::width, Options::height), "OpenGL", sf::Style::Default, settings)
{
Init();
}
Vertex shader for lights:
#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoords;
out vec3 Normal;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main()
{
Normal = transpose(inverse(mat3(model))) * aNormal;
gl_Position = projection * view * model * vec4(aPos, 1.0);
}
Declaration of window and settings
sf::RenderWindow window;
sf::ContextSettings settings;

The problem is that in C++, members of a class are initialised in the order of their declaration, not in the order they are listed in any constructor's mem-initialiser list.
If you turn on enough warnings, your compiler should tell you that in the Program constructor you've shown, settings will be initialised after window. That is precisely your problem: settings is passed to window's constructor before it has been given the values you specified. Swap the order of the two member declarations in the class to resolve it.
The reason for the rule is a more fundamental C++ rule: objects are always destroyed in the exact opposite order of their construction. A class's destructor therefore needs a single order in which to destroy the members, regardless of which constructor was used to create the object, so that order is fixed to be the order of declaration. Note that declaration order also controls creation/destruction order in other contexts (such as function-local variables), so using it here is consistent.
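A minimal, self-contained sketch of the pitfall (illustrative names, not the asker's real classes):

```cpp
#include <string>
#include <vector>

// Members are constructed in declaration order, so 'window' is built
// before 'settings' regardless of how the mem-initialiser list is
// ordered -- which is why the question's window was created from a
// default-constructed ContextSettings.
std::vector<std::string> constructionOrder;

struct Settings {
    Settings() { constructionOrder.push_back("settings"); }
};

struct Window {
    Window() { constructionOrder.push_back("window"); }
};

struct Program {
    Window window;     // declared first  -> constructed first (the bug)
    Settings settings; // declared second -> constructed second
    // The fix for the question's code: declare settings before window.
    Program() : settings(), window() {} // this list's order is ignored
};
```

With settings declared before window, the log would read "settings" then "window", and the question's ContextSettings would reach the window constructor fully initialised.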

Related

Multiple IGraphicsContext with OpenTK / using multiple OpenGL contexts in a single window

What I want to do:
The main goal: Use SkiaSharp and OpenTK together. Render 2D and 3D.
What is the problem: SkiaSharp messes up the state of OpenGL, so I can't use it for 3D without saving and restoring some states.
Old solution (with OpenGL < 4): I used GL.PushClientAttrib(ClientAttribMask.ClientAllAttribBits); + some additional values (saved/restored them).
Now I read that this is not necessarily the best solution, and OpenGL 4 no longer has GL.PushClientAttrib. The usual way seems to be to use a separate OpenGL context.
Have seen already: OpenTK multiple GLControl with a single Context
I am not using GLControl because I am not using WinForms, so that is not really helpful. What I tried:
internal class Program
{
public static void Main(string[] args)
{
new Program().Run();
}
private readonly GameWindow _gameWindow;
private IGraphicsContext _context2;
private GlObject _glObject;
private int _programId;
private GlObject _glObject2;
private int _programId2;
public Program()
{
_gameWindow = new GameWindow(800,600,
GraphicsMode.Default, "", GameWindowFlags.Default,
DisplayDevice.Default,
4, 2, GraphicsContextFlags.ForwardCompatible);
_gameWindow.Resize += OnResize;
_gameWindow.RenderFrame += OnRender;
_gameWindow.Load += OnLoad;
}
public void Run()
{
_gameWindow.Run();
}
private void OnLoad(object sender, EventArgs e)
{
_programId = ShaderFactory.CreateShaderProgram();
_glObject = new GlObject(new[]
{
new Vertex(new Vector4(-0.25f, 0.25f, 0.5f, 1f), Color4.Black),
new Vertex(new Vector4(0.0f, -0.25f, 0.5f, 1f), Color4.Black),
new Vertex(new Vector4(0.25f, 0.25f, 0.5f, 1f), Color4.Black),
});
_context2 = new GraphicsContext(GraphicsMode.Default, _gameWindow.WindowInfo, 4, 2,
GraphicsContextFlags.Default);
_context2.MakeCurrent(_gameWindow.WindowInfo);
_programId2 = ShaderFactory.CreateShaderProgram();
_glObject2 = new GlObject(new[]
{
new Vertex(new Vector4(-0.25f, 0.25f, 0.5f, 1f), Color4.Yellow),
new Vertex(new Vector4(0.0f, -0.25f, 0.5f, 1f), Color4.Yellow),
new Vertex(new Vector4(0.25f, 0.25f, 0.5f, 1f), Color4.Yellow),
});
_gameWindow.MakeCurrent();
}
private void OnRender(object sender, FrameEventArgs e)
{
_gameWindow.Context.MakeCurrent(_gameWindow.WindowInfo);
GL.Viewport(0, 0, _gameWindow.Width, _gameWindow.Height);
GL.ClearColor(0.3f,0.1f,0.1f,1);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(_programId);
_glObject.Render();
GL.Flush();
_gameWindow.SwapBuffers();
// i tried different combinations here
// as i read GL.Clear will always clear the whole window
_context2.MakeCurrent(_gameWindow.WindowInfo);
GL.Viewport(10,10,100,100);
//GL.ClearColor(0f, 0.8f, 0.1f, 1);
//GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(_programId2);
_glObject2.Render();
GL.Flush();
_context2.SwapBuffers();
}
private void OnResize(object sender, EventArgs e)
{
var clientRect = _gameWindow.ClientRectangle;
GL.Viewport(0, 0, clientRect.Width, clientRect.Height);
}
}
Vertex shader:
#version 450 core
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 vs_color;
void main(void)
{
gl_Position = position;
vs_color = color;
}
Fragment shader:
#version 450 core
in vec4 vs_color;
out vec4 color;
void main(void)
{
color = vs_color;
}
It works fine with a single context. When I use both contexts, the first context gets rendered but flickers, and there is no second triangle visible at all (as I understand GL.Viewport, it should be visible in the lower left corner of the screen).
You could help me by answering one or more of the following questions:
Is there another way to restore the original context?
Is there another way to render with HW acceleration on a part of the screen, ideally with separate OpenGL state for that specific area?
How can I get the solution above to work the way I want (no flicker, and a smaller scene rendered inside a smaller portion of the window)?
After trying some more combinations, what did the trick was:
Call SwapBuffers only on the last used context in the render handler (even when you use three contexts). Then no flicker occurs and rendering seems to work fine; the contexts' states appear to be independent of each other.

Waving Flag Effect in Opengl (C++) [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 6 years ago.
As part of a project, I need to create a flag with a waving effect as shown below:
https://i.stack.imgur.com/db5zB.gif
I couldn't manage to add the wave effect, so I removed the crescent and star and am now trying to wave the flag itself.
I believe the time I pass doesn't update, so the animation doesn't happen.
What I did so far is:
#include "Angel.h"
float PI = 3.14;
int verticeNumber = 0;
float time;
struct point {
GLfloat x;
GLfloat y;
};
point vertices[500];
// OpenGL initialization
void init()
{
// Create a vertex array object
vertices[0].x = -0.75;
vertices[0].y = 0.5;
vertices[1].x = 0.75;
vertices[1].y = 0.5;
vertices[2].x = 0.75;
vertices[2].y = -0.5;
vertices[3].x = -0.75;
vertices[3].y = -0.5;
vertices[4].x = -0.75;
vertices[4].y = 0.5;
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
GLuint buffer;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Load shaders and use the resulting shader program
time = glutGet(GLUT_ELAPSED_TIME);
GLuint program = InitShader("vshader.glsl", "fshader.glsl");
glUseProgram(program);
// set up vertex arrays
GLuint vPosition = glGetAttribLocation(program, "vPosition");
glEnableVertexAttribArray(vPosition);
glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, 0);
// Paint the background
glClearColor(0.36, 0.74, 0.82, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 5);
glutSwapBuffers();
}
void display(void)
{
}
// Ends the program on ESC press.
void keyboard(unsigned char key, int x, int y)
{
switch (key) {
case 033:
exit(EXIT_SUCCESS);
break;
}
}
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
glutInitWindowSize(800, 800);
// OpenGL Version Check
glutInitContextVersion(3, 2);
glutInitContextProfile(GLUT_CORE_PROFILE);
// Name the window
glutCreateWindow("I'm Rick Harrison, and this is my pawn shop");
glewExperimental = GL_TRUE;
glewInit();
init();
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
glutMainLoop();
return 0;
}
My shader files are:
#version 430
varying vec4 f_color;
void main(void) {
gl_FragColor = vec4(1,0,0,1);
}
and
#version 430
in vec4 vPosition;
in float time;
void main()
{
vec4 temp = vPosition;
temp.y = cos(0.1*time)*temp.y;
gl_Position = temp;
}
It results in this:
https://i.stack.imgur.com/MVSp0.png without any animations.
Time needs to be updated each frame, as user2927848 mentioned. I just want to advise on your waving effect: if you want to use time in your vertex shader, you need far more vertices than the handful you generated yourself, because the pipeline invokes the vertex shader only once per vertex. With so few vertices you will not get the beautiful wave effect you expected.
In conclusion, there are two ways to make your flag wave smoothly:
Pass more vertex to Vertex Shader
Pass the time variable to Fragment Shader
For the first suggestion, you might need to generate around 100 * 50 vertices to make the wave smooth, or more to look even better.
The second suggestion also has a small issue: if your image entirely fills the plane, you need to give it some margin away from the border. The easy way to solve this is to give your *.png image a transparent margin at the border and apply your waving function to the UV value.
I implemented the simplest waving effect in Shadertoy. The code is below, because it is short:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 uv = fragCoord.xy / iResolution.xy;
uv.y = uv.y + 0.1 * sin(iGlobalTime + 10.0 * uv.x);
vec4 textureColor = texture2D(iChannel0, uv);
fragColor = textureColor;
}
I don't see you updating 'time' each frame. https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/uniform.php
Your display function is empty, so you never redraw the image after your 'init'; even updating the time won't fix this by itself. You should do all your actual drawing and buffer swapping in the display function.
Also, written this way the shader will move the entire flag up and down. You should use cos(speed * (time + offset)), where offset is calculated from the distance to the base of the flag. You'll likely need quite a few more vertices for the animation to be fluid, so if you get this working and it looks odd, that is why.
There are quite a few issues, but that should get you moving in the right direction.
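The distance-based offset can be sketched as a small helper (names and constants are illustrative; the same expression would live in the vertex shader, driven by a time uniform):

```cpp
#include <cmath>

// Vertical displacement of a flag vertex. x is the horizontal distance
// from the flagpole (x = 0 at the pole), t is elapsed time. The
// amplitude scales with x, so the edge attached to the pole stays
// fixed while the free edge waves the most.
float waveOffset(float x, float t)
{
    const float amplitude = 0.1f;
    const float frequency = 10.0f; // spatial frequency along the flag
    return amplitude * x * std::sin(t + frequency * x);
}
```

Applied per vertex as `temp.y += waveOffset(...)`, this produces a travelling ripple rather than the whole-flag bounce that `cos(0.1*time)*temp.y` gives.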

Asynchronous texture upload with Qt and OpenGL

I'm writing a small video player using QOpenGLWidget. At the moment I'm struggling to get asynchronous texture upload working. In an earlier version of my code I waited for a "next frame" signal, upon which the frame was read from the hard drive, uploaded to the GPU and then rendered. Now I want to make this asynchronous using a ring buffer on the GPU: a separate thread uploads the next N textures, while the main thread takes one of these textures, displays and invalidates it. As a first step I wrote a class to upload a single texture, which I want to use from my QOpenGLWidget. I created shared contexts between my class and the QOpenGLWidget.
class GLWidget : public QOpenGLWidget, protected QOpenGLFunctions;
void GLWidget::paintGL() {
QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
m_program->bind();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
m_program->setUniformValue("textureSamplerRed", 0);
m_program->setUniformValue("textureSamplerGreen", 1);
m_program->setUniformValue("textureSamplerBlue", 2);
glUniformMatrix4fv(m_matMVP_Loc, 1, GL_FALSE, &m_MVP[0][0]);
m_vertice_indices_Vbo.bind();
m_vertices_Vbo.bind();
m_texture_coordinates_Vbo.bind();
glDrawElements(
GL_TRIANGLE_STRIP, // mode
m_videoFrameTriangles_indices.size(), // count
GL_UNSIGNED_INT, // type
(void*)0 // element array buffer offset
);
m_program->release();
}
I wait for the GLWidget::initializeGL() to finish, emit a signal which is connected to the initialization of my texture loading class:
class TextureLoader2 : public QObject, protected QOpenGLFunctions;
void TextureLoader2::initialize(QOpenGLContext *context)
{
// sharing the OpenGL context with GLWidget
m_context.setFormat(context->format()); // need this?
m_context.setShareContext(context);
m_context.create();
m_context.makeCurrent(context->surface());
m_surface = context->surface();
}
And here is how I load a new frame:
void TextureLoader2::loadNextFrame(const int frameIdx)
{
QElapsedTimer timer;
timer.start();
bool is_current = m_context.makeCurrent(m_surface);
// some code which reads the frame from disk and sends to the GPU.
// srcR is a pointer to the the data for red. the upload for G and B is similar
if(!m_texture_Rdata)
{
m_texture_Rdata = std::make_shared<QOpenGLTexture>(QOpenGLTexture::Target2D);
m_texture_Rdata->create();
m_texture_Rdata->setSize(m_frameWidth,m_frameHeight);
m_texture_Rdata->setFormat(QOpenGLTexture::R8_UNorm);
m_texture_Rdata->allocateStorage(QOpenGLTexture::Red,QOpenGLTexture::UInt8);
m_texture_Rdata->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, srcR);
// Set filtering modes for texture minification and magnification
m_texture_Rdata->setMinificationFilter(QOpenGLTexture::Nearest);
m_texture_Rdata->setMagnificationFilter(QOpenGLTexture::Linear);
m_texture_Rdata->setWrapMode(QOpenGLTexture::ClampToBorder);
}
else
{
m_texture_Rdata->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, srcR);
}
// these are QOpenGLTextures
m_texture_Rdata->bind(0);
m_texture_Gdata->bind(1);
m_texture_Bdata->bind(2);
emit frameUploaded();
}
My QOpenGLWidget is displaying nothing unfortunately. I don't know how to proceed.
I know that the code for reading and sending the texture to the GPU is working, since if I leave out the line
bool is_current = m_context.makeCurrent(m_surface);
my whole window (not just the frame containing the QOpenGLWidget) is overwritten, displaying the texture.
I've been searching quite a bit, but I couldn't find any simple working example code for what I want to do. I hope someone has an idea what the issue might be. I've seen people mention using a QOffscreenSurface, or a second hidden widget with shared but separate contexts. Maybe I have to use one of those?
My fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 fragmentUV;
// Output data
out vec4 color_0;
// Values that stay constant for the whole mesh.
uniform sampler2D textureSamplerRed;
uniform sampler2D textureSamplerGreen;
uniform sampler2D textureSamplerBlue;
void main(){
vec3 myColor;
myColor.r = texture2D( textureSamplerRed, fragmentUV ).r;
myColor.g = texture2D( textureSamplerGreen, fragmentUV ).r;
myColor.b = texture2D( textureSamplerBlue, fragmentUV ).r;
color_0 = vec4(myColor, 1.0f);
}

Draw transparent holes in a texture/plain color

I'm running into a problem and I don't know what the best practice for it is. I have a background that moves upward, which is in fact "slices" that move together, as if the screen were split into 4-5 parts horizontally. I need to be able to draw a see-through hole (circle) in the background, at a specified position that will change dynamically each frame or so.
Here is how I generate a zone, I don't think there's much of a problem there:
// A 'zone' is simply the 'slice' of ground that moves upward. There's about 4 of
// them visible on screen at the same time, and they are automatically generated by
// a method irrelevant to the situation. Zones are Sprites.
// ---------
void LevelLayer::Zone::generate(LevelLayer *sender) {
// [...]
// Make a background for the zone
Sprite *background = this->generateBackgroundSprite();
background->setPosition(_contentSize.width / 2, _contentSize.height / 2);
this->addChild(background, 0);
}
This is the Zone::generateBackgroundSprite() method:
// generates dynamically a new background texture
Sprite *LevelLayer::Zone::generateBackgroundSprite() {
RenderTexture *rt = RenderTexture::create(_contentSize.width, _contentSize.height);
rt->retain();
Color4B dirtColorByte = Color4B(/*initialize the color with bytes*/);
Color4F dirtColor(dirtColorByte);
rt->beginWithClear(dirtColor.r, dirtColor.g, dirtColor.b, dirtColor.a);
// [Nothing here yet, gotta learn OpenGL m8]
rt->end();
// ++++++++++++++++++++
// I'm just testing clipping node, it works but the FPS get significantly lower.
// If I lock them to 60, they get down to 30, and if I lock them there they get
// to 20 :(
// Also for the test I'm drawing a square since ClippingNode doesn't seem to
// like circles...
DrawNode *square = DrawNode::create();
Point squarePoints[4] = { Point(-20, -20), Point(20, -20), Point(20, 20), Point(-20, 20) };
square->drawPolygon(squarePoints, 4, Color4F::BLACK, 0.0f, Color4F(0, 0, 0, 0));
square->setPosition(0, 0);
// Make a stencil
Node *stencil = Node::create();
stencil->addChild(square);
// Create a clipping node with the prepared stencil
ClippingNode *clippingNode = ClippingNode::create(stencil);
clippingNode->setInverted(true);
clippingNode->addChild(rt);
Sprite *ret = Sprite::create();
ret->addChild(clippingNode);
rt->release();
return ret;
}
So I'm asking you guys, what would you do in such a situation? Is what I am doing a good idea? Would you do it in another more imaginative way?
PS This is a rewrite of a little app I made for iOS (I want to port it to Android), and I was using MutableTextures in the Objective-C version (it was working). I'm just trying to see if there's a better way using RenderTexture, so I can dynamically create background images using OpenGL calls.
EDIT (SOLUTION)
I wrote my own simple fragment shader that "masks" the visible parts of a texture (the background) based on the visible parts of another texture (the mask). I have an array of points that determine where my circles are on the screen, and in the update method I draw them to a RenderTexture. I then take the generated texture and use it as the mask I pass to the shader.
This is my shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform sampler2D u_alphaMaskTexture;
void main() {
float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a;
float texAlpha = texture2D(u_texture, v_texCoord).a;
float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is not visible
vec3 texColor = texture2D(u_texture, v_texCoord).rgb;
gl_FragColor = vec4(texColor, blendAlpha);
return;
}
init method:
bool HelloWorld::init() {
// [...]
Size visibleSize = Director::getInstance()->getVisibleSize();
// Load and cache the custom shader
this->loadCustomShader();
// 'generateBackgroundSlice()' creates a new RenderTexture and fills it with a
// color, nothing too complicated here so I won't copy-paste it in my edit
m_background = Sprite::createWithTexture(this->generateBackgroundSprite()->getSprite()->getTexture());
m_background->setPosition(visibleSize.width / 2, visibleSize.height / 2);
this->addChild(m_background);
m_background->setShaderProgram(ShaderCache::getInstance()->getProgram(Shader_AlphaMask_frag_key));
GLProgram *shader = m_background->getShaderProgram();
m_alphaMaskTextureUniformLocation = glGetUniformLocation(shader->getProgram(), "u_alphaMaskTexture");
glUniform1i(m_alphaMaskTextureUniformLocation, 1);
m_alphaMaskRender = RenderTexture::create(m_background->getContentSize().width,
m_background->getContentSize().height);
m_alphaMaskRender->retain();
// [...]
}
loadCustomShader method:
void HelloWorld::loadCustomShader() {
// Load the content of the vertex and fragement shader
FileUtils *fileUtils = FileUtils::getInstance();
string vertexSource = ccPositionTextureA8Color_vert;
string fragmentSource = fileUtils->getStringFromFile(
fileUtils->fullPathForFilename("Shader_AlphaMask_frag.fsh"));
// Init a shader and add its attributes
GLProgram *shader = new GLProgram;
shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str());
shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS);
shader->link();
shader->updateUniforms();
ShaderCache::getInstance()->addProgram(shader, Shader_AlphaMask_frag_key);
// Trace OpenGL errors if any
CHECK_GL_ERROR_DEBUG();
}
update method:
void HelloWorld::update(float dt) {
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// Create the mask texture from the points in the m_circlePos array
GLProgram *shader = m_background->getShaderProgram();
m_alphaMaskRender->beginWithClear(0, 0, 0, 0); // Begin with transparent mask
for (vector<Point>::iterator it = m_circlePos.begin(); it != m_circlePos.end(); it++) {
// draw a circle on the mask
const float radius = 40;
const int resolution = 20;
Point circlePoints[resolution];
Point center = *it;
center = Director::getInstance()->convertToUI(center); // OpenGL has a weird coordinates system
float angle = 0;
for (int i = 0; i < resolution; i++) {
float x = (radius * cosf(angle)) + center.x;
float y = (radius * sinf(angle)) + center.y;
angle += (2 * M_PI) / resolution;
circlePoints[i] = Point(x, y);
}
DrawNode *circle = DrawNode::create();
circle->retain();
circle->drawPolygon(circlePoints, resolution, Color4F::BLACK, 0.0f, Color4F(0, 0, 0, 0));
circle->setPosition(Point::ZERO);
circle->visit();
circle->release();
}
m_alphaMaskRender->end();
Texture2D *alphaMaskTexture = m_alphaMaskRender->getSprite()->getTexture();
alphaMaskTexture->setAliasTexParameters(); // Disable linear interpolation
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
shader->use();
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, alphaMaskTexture->getName());
glActiveTexture(GL_TEXTURE0);
}
What you might want to look at is framebuffers. I'm not too familiar with the mobile API for OpenGL, but I'm sure you have access to framebuffers.
The idea is to do a first pass where you render the circles you want cut out of your background into a new framebuffer texture, then use that texture as an alpha map in the pass that renders the background. When you render a circle, set the alpha value in the texture to 0.0, and leave it at 1.0 everywhere else; when rendering the background, set the alpha of each fragment to the alpha sampled from the first pass's texture.
You can think of it as the same idea as a mask, just using another texture.
Hope this helps :)
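The per-pixel arithmetic behind such a mask is small enough to state as a plain function (a sketch mirroring the `blendAlpha` line in the asker's shader; the function name is illustrative):

```cpp
// Per-pixel mask blend: where the mask is opaque (maskAlpha = 1.0) the
// background becomes fully transparent, and where the mask is empty
// (maskAlpha = 0.0) the background alpha is unchanged.
float maskedAlpha(float texAlpha, float maskAlpha)
{
    return (1.0f - maskAlpha) * texAlpha;
}
```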

Modern equivalent of `gluOrtho2d`

What is the modern equivalent of the OpenGL function gluOrtho2d? clang is giving me deprecation warnings. I believe I need to write some kind of vertex shader? What should it look like?
I started off this answer thinking "It's not that different, you just have to...".
I started writing some code to prove myself right, and ended up not really doing so. Anyway, here are the fruits of my efforts: a minimal annotated example of "modern" OpenGL.
There's a good bit of code you'll need before modern OpenGL will start to act like old-school OpenGL. I'm not going to get into the reasons why you might like to do it the new way (or not) -- there are countless other answers that give a pretty good rundown. Instead I'll post some minimal code that can get you running if you're so inclined.
You should end up with this stunning piece of art:
Basic Render Process
Part 1: Vertex buffers
void TestDraw(){
// create a vertex buffer (This is a buffer in video memory)
GLuint my_vertex_buffer;
glGenBuffers(1 /*ask for one buffer*/, &my_vertex_buffer);
const float a_2d_triangle[] =
{
200.0f, 10.0f,
10.0f, 200.0f,
400.0f, 200.0f
};
// GL_ARRAY_BUFFER indicates we're using this for
// vertex data (as opposed to things like feedback, index, or texture data)
// so this call says use my_vertex_data as the vertex data source
// this will become relevant as we make draw calls later
glBindBuffer(GL_ARRAY_BUFFER, my_vertex_buffer);
// allocate some space for our buffer
glBufferData(GL_ARRAY_BUFFER, 4096, NULL, GL_DYNAMIC_DRAW);
// we've been a bit optimistic, asking for 4k of space even
// though there is only one triangle.
// the NULL source indicates that we don't have any data
// to fill the buffer quite yet.
// GL_DYNAMIC_DRAW indicates that we intend to change the buffer
// data from frame-to-frame.
// the idea is that we can place more than 3(!) vertices in the
// buffer later as part of normal drawing activity
// now we actually put the vertices into the buffer.
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(a_2d_triangle), a_2d_triangle);
Part 2: Vertex Array Object:
We need to define how the data contained in my_vertex_buffer is structured. This state is contained in a vertex array object (VAO); in modern OpenGL there needs to be at least one of these.
GLuint my_vao;
glGenVertexArrays(1, &my_vao);
//lets use the VAO we created
glBindVertexArray(my_vao);
// now we need to tell the VAO how the vertices in my_vertex_buffer
// are structured
// our vertices are really simple: each one has 2 floats of position data
// they could have been more complicated (texture coordinates, color --
// whatever you want)
// enable the first attribute in our VAO
glEnableVertexAttribArray(0);
// describe what the data for this attribute is like
glVertexAttribPointer(0, // the index we just enabled
2, // the number of components (our two position floats)
GL_FLOAT, // the type of each component
false, // should the GL normalize this for us?
2 * sizeof(float), // number of bytes until the next component like this
(void*)0); // the offset into our vertex buffer where this element starts
Part 3: Shaders
OK, we have our source data all set up, now we can set up the shader which will transform it into pixels
// first create some ids
GLuint my_shader_program = glCreateProgram();
GLuint my_fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
GLuint my_vertex_shader = glCreateShader(GL_VERTEX_SHADER);
// we'll need to compile the vertex shader and fragment shader
// and then link them into a full "shader program"
// load one string from &my_fragment_source
// the NULL indicates that the string is null-terminated
const char* my_fragment_source = FragmentSourceFromSomewhere();
glShaderSource(my_fragment_shader, 1, &my_fragment_source, NULL);
// now compile it:
glCompileShader(my_fragment_shader);
// then check the result
GLint compiled_ok;
glGetShaderiv(my_fragment_shader, GL_COMPILE_STATUS, &compiled_ok);
if (!compiled_ok){ printf("Oh Noes, fragment shader didn't compile!\n"); }
else{
glAttachShader(my_shader_program, my_fragment_shader);
}
// and again for the vertex shader
const char* my_vertex_source = VertexSourceFromSomewhere();
glShaderSource(my_vertex_shader, 1, &my_vertex_source, NULL);
glCompileShader(my_vertex_shader);
glGetShaderiv(my_vertex_shader, GL_COMPILE_STATUS, &compiled_ok);
if (!compiled_ok){ printf("Oh Noes, vertex shader didn't compile!\n"); }
else{
glAttachShader(my_shader_program, my_vertex_shader);
}
//finally, link the program, and set it active
glLinkProgram(my_shader_program);
glUseProgram(my_shader_program);
Part 4: Drawing things on the screen
//get the screen size
float my_viewport[4];
glGetFloatv(GL_VIEWPORT, my_viewport);
//now create a projection matrix
float my_proj_matrix[16];
MyOrtho2D(my_proj_matrix, 0.0f, my_viewport[2], my_viewport[3], 0.0f);
//"uProjectionMatrix" refers directly to the variable of that name in
// shader source
GLuint my_projection_ref =
glGetUniformLocation(my_shader_program, "uProjectionMatrix");
// send our projection matrix to the shader
glUniformMatrix4fv(my_projection_ref, 1, GL_FALSE, my_proj_matrix );
//clear the background
glClearColor(0.3, 0.4, 0.4, 1.0);
glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
// *now* after that tiny setup, we're ready to draw the best 24 bytes of
// vertex data ever.
// draw the 3 vertices starting at index 0, interpreting them as triangles
glDrawArrays(GL_TRIANGLES, 0, 3);
// now just swap buffers however your window manager lets you
}
And That's it!
... except for the actual
Shaders
I started to get a little tired at this point, so the comments are a bit lacking. Let me know if you'd like anything clarified.
const char* VertexSourceFromSomewhere()
{
return
"#version 330\n"
"layout(location = 0) in vec2 inCoord;\n"
"uniform mat4 uProjectionMatrix;\n"
"void main()\n"
"{\n"
" gl_Position = uProjectionMatrix*(vec4(inCoord, 0, 1.0));\n"
"}\n";
}
const char* FragmentSourceFromSomewhere()
{
return
"#version 330 \n"
"out vec4 outFragColor;\n"
"vec4 DebugMagenta(){ return vec4(1.0, 0.0, 1.0, 1.0); }\n"
"void main() \n"
"{\n"
" outFragColor = DebugMagenta();\n"
"}\n";
}
The Actual Question you asked: Orthographic Projection
As noted, the actual math is just directly from Wikipedia.
void MyOrtho2D(float* mat, float left, float right, float bottom, float top)
{
// this is basically from
// http://en.wikipedia.org/wiki/Orthographic_projection_(geometry)
const float zNear = -1.0f;
const float zFar = 1.0f;
const float inv_z = 1.0f / (zFar - zNear);
const float inv_y = 1.0f / (top - bottom);
const float inv_x = 1.0f / (right - left);
//first column
*mat++ = (2.0f*inv_x);
*mat++ = (0.0f);
*mat++ = (0.0f);
*mat++ = (0.0f);
//second
*mat++ = (0.0f);
*mat++ = (2.0*inv_y);
*mat++ = (0.0f);
*mat++ = (0.0f);
//third
*mat++ = (0.0f);
*mat++ = (0.0f);
*mat++ = (-2.0f*inv_z);
*mat++ = (0.0f);
//fourth
*mat++ = (-(right + left)*inv_x);
*mat++ = (-(top + bottom)*inv_y);
*mat++ = (-(zFar + zNear)*inv_z);
*mat++ = (1.0f);
}
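As a sanity check, here is a sketch in plain C++ (no OpenGL required, illustrative names) that rebuilds the same column-major matrix and verifies that screen-space corners land on the NDC corners:

```cpp
#include <cmath>

// Build the same column-major ortho matrix as MyOrtho2D above,
// with the fixed zNear = -1, zFar = 1.
void ortho2D(float* m, float l, float r, float b, float t)
{
    const float n = -1.0f, f = 1.0f;
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = 2.0f / (r - l);
    m[5]  = 2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1.0f;
}

// Multiply the matrix by (x, y, 0, 1) and return the NDC x/y,
// assuming the column-major layout written above.
void project(const float* m, float x, float y, float* nx, float* ny)
{
    *nx = m[0] * x + m[4] * y + m[12];
    *ny = m[1] * x + m[5] * y + m[13];
}
```

Note that the drawing code calls MyOrtho2D with bottom and top swapped, so screen (0, 0) (top-left) maps to NDC (-1, 1); that is what gives the y-down pixel coordinates the triangle data assumes.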
Modern OpenGL is significantly different. You won't be able to just drop in a new function. Read up...
http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-1:-The-Graphics-Pipeline.html
http://www.arcsynthesis.org/gltut/index.html
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/