Multiple IGraphicsContext with OpenTK / using multiple OpenGL contexts in a single window - opengl

What I want to do:
The main goal: Use SkiaSharp and OpenTK together. Render 2D and 3D.
What is the problem: SkiaSharp messes up the state of OpenGL, so I can't use it for 3D without saving and restoring some states.
Old solution (with OpenGL < 4): I used GL.PushClientAttrib(ClientAttribMask.ClientAllAttribBits); + some additional values (saved/restored them).
Now I read that this is not necessarily the best solution, and OpenGL 4 no longer has GL.PushClientAttrib. The usual way seems to be to use a separate OpenGL context.
Have seen already: OpenTK multiple GLControl with a single Context
I am not using GLControl because I am not using WinForms, so that is not really helpful. What I tried:
internal class Program
{
public static void Main(string[] args)
{
new Program().Run();
}
private readonly GameWindow _gameWindow;
private IGraphicsContext _context2;
private GlObject _glObject;
private int _programId;
private GlObject _glObject2;
private int _programId2;
public Program()
{
_gameWindow = new GameWindow(800,600,
GraphicsMode.Default, "", GameWindowFlags.Default,
DisplayDevice.Default,
4, 2, GraphicsContextFlags.ForwardCompatible);
_gameWindow.Resize += OnResize;
_gameWindow.RenderFrame += OnRender;
_gameWindow.Load += OnLoad;
}
public void Run()
{
_gameWindow.Run();
}
private void OnLoad(object sender, EventArgs e)
{
_programId = ShaderFactory.CreateShaderProgram();
_glObject = new GlObject(new[]
{
new Vertex(new Vector4(-0.25f, 0.25f, 0.5f, 1f), Color4.Black),
new Vertex(new Vector4(0.0f, -0.25f, 0.5f, 1f), Color4.Black),
new Vertex(new Vector4(0.25f, 0.25f, 0.5f, 1f), Color4.Black),
});
_context2 = new GraphicsContext(GraphicsMode.Default, _gameWindow.WindowInfo, 4, 2,
GraphicsContextFlags.Default);
_context2.MakeCurrent(_gameWindow.WindowInfo);
_programId2 = ShaderFactory.CreateShaderProgram();
_glObject2 = new GlObject(new[]
{
new Vertex(new Vector4(-0.25f, 0.25f, 0.5f, 1f), Color4.Yellow),
new Vertex(new Vector4(0.0f, -0.25f, 0.5f, 1f), Color4.Yellow),
new Vertex(new Vector4(0.25f, 0.25f, 0.5f, 1f), Color4.Yellow),
});
_gameWindow.MakeCurrent();
}
private void OnRender(object sender, FrameEventArgs e)
{
_gameWindow.Context.MakeCurrent(_gameWindow.WindowInfo);
GL.Viewport(0, 0, _gameWindow.Width, _gameWindow.Height);
GL.ClearColor(0.3f,0.1f,0.1f,1);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(_programId);
_glObject.Render();
GL.Flush();
_gameWindow.SwapBuffers();
// I tried different combinations here.
// As I read, GL.Clear will always clear the whole window.
_context2.MakeCurrent(_gameWindow.WindowInfo);
GL.Viewport(10,10,100,100);
//GL.ClearColor(0f, 0.8f, 0.1f, 1);
//GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(_programId2);
_glObject2.Render();
GL.Flush();
_context2.SwapBuffers();
}
private void OnResize(object sender, EventArgs e)
{
var clientRect = _gameWindow.ClientRectangle;
GL.Viewport(0, 0, clientRect.Width, clientRect.Height);
}
}
Vertex shader:
#version 450 core
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 vs_color;
void main(void)
{
gl_Position = position;
vs_color = color;
}
Fragment shader:
#version 450 core
in vec4 vs_color;
out vec4 color;
void main(void)
{
color = vs_color;
}
It works fine with a single context. When I use both contexts, what happens is: the first context gets rendered but flickers, and there is no second triangle visible at all (as I understand GL.Viewport, it should be visible in the lower-left corner of the window).
You could help me by answering one or more of the following questions:
Is there another way to restore the original context?
Is there another way to render with HW acceleration to a part of the screen, ideally with specific OpenGL state for that area?
How can I get the solution above to work the way I want (no flicker, with a smaller scene rendered inside a smaller portion of the window)?
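One detail worth noting when reasoning about GL.Viewport: OpenGL viewport coordinates have their origin at the lower-left corner of the window, while most windowing APIs measure rectangles from the top-left. A minimal sketch of the conversion (the helper name is invented for illustration):

```cpp
#include <cassert>

// glViewport(x, y, w, h) measures y from the BOTTOM of the window, while
// window-system rectangles are usually measured from the top. Hypothetical
// helper converting a top-left based rectangle's y to the value glViewport
// expects.
int viewportY(int windowHeight, int rectTop, int rectHeight) {
    return windowHeight - (rectTop + rectHeight);
}
```

So GL.Viewport(10, 10, 100, 100) does indeed place the 100x100 region 10 pixels above the bottom-left corner of the window.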

After trying some more combinations, what did the trick was:
Call SwapBuffers only on the last-used context in the render handler (even when you use three contexts). Then no flicker occurs, rendering works fine, and the contexts' states appear to be independent of each other.

Related

Unable to get tessellation shader working

I've just started following OpenGL SuperBible, 7th ed., and translating the examples into LWJGL, but I have become stuck on the tessellation shader. In the following program there is the line "//IF THESE TWO LINES...": if the two lines following it are commented out, the vertex and fragment shaders work, but when control.tess.glsl and eval.tess.glsl are included, the triangle no longer renders.
I've uploaded my program onto github but will reproduce the code here as well:
package com.ch3vertpipeline;
public class App {
public static void main(String [] args){
LwjglSetup setup = new LwjglSetup();
setup.run();
}
}
package com.ch3vertpipeline;
import java.nio.IntBuffer;
import java.util.Scanner;
import org.lwjgl.*;
import org.lwjgl.glfw.*;
import org.lwjgl.opengl.*;
import org.lwjgl.system.*;
import static org.lwjgl.glfw.Callbacks.*;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.system.MemoryStack.stackPush;
import static org.lwjgl.system.MemoryUtil.NULL;
public class LwjglSetup {
private long window;
private int vertex_shader;
private int fragment_shader;
private int tess_control_shader;
private int tess_evaluation_shader;
private int program;
private int vertex_array_object;
public LwjglSetup() {
}
private void init() {
GLFWErrorCallback.createPrint(System.err).set();
if (!glfwInit()) {
throw new IllegalStateException("Unable to initialize GLFW");
}
// Configure GLFW
glfwDefaultWindowHints(); // optional, the current window hints are already the default
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE); // the window will stay hidden after creation
glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE); // the window will be resizable
// Create the window
window = glfwCreateWindow(300, 300, "Hello World!", NULL, NULL);
if (window == NULL) {
throw new RuntimeException("Failed to create the GLFW window");
}
// Setup a key callback. It will be called every time a key is pressed, repeated or released.
glfwSetKeyCallback(window, (window, key, scancode, action, mods) -> {
if (key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE) {
glfwSetWindowShouldClose(window, true); // We will detect this in the rendering loop
}
});
// Get the thread stack and push a new frame
try (MemoryStack stack = stackPush()) {
IntBuffer pWidth = stack.mallocInt(1); // int*
IntBuffer pHeight = stack.mallocInt(1); // int*
// Get the window size passed to glfwCreateWindow
glfwGetWindowSize(window, pWidth, pHeight);
// Get the resolution of the primary monitor
GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());
// Center the window
glfwSetWindowPos(
window,
(vidmode.width() - pWidth.get(0)) / 2,
(vidmode.height() - pHeight.get(0)) / 2
);
} // the stack frame is popped automatically
// Make the OpenGL context current
glfwMakeContextCurrent(window);
// Enable v-sync
glfwSwapInterval(1);
// Make the window visible
glfwShowWindow(window);
}
public void run() {
System.out.println("Hello LWJGL " + Version.getVersion() + "!");
init();
loop();
// Free the window callbacks and destroy the window
glfwFreeCallbacks(window);
glfwDestroyWindow(window);
// Terminate GLFW and free the error callback
glfwTerminate();
glfwSetErrorCallback(null).free();
}
private void loop() {
GL.createCapabilities();//Critical
System.out.println("OpenGL Version: " + glGetString(GL_VERSION));
this.compileShader();
vertex_array_object = glGenVertexArrays();
glBindVertexArray(vertex_array_object);
while (!glfwWindowShouldClose(window)) {
double curTime = System.currentTimeMillis() / 1000.0;
double slowerTime = curTime; // assigned directly, but I was applying a factor here
final float colour[] = {
(float) Math.sin(slowerTime) * 0.5f + 0.5f,
(float) Math.cos(slowerTime) * 0.5f + 0.5f,
0.0f, 1.0f};
glClearBufferfv(GL_COLOR, 0, colour);
glUseProgram(program);
final float attrib[] = {
(float) Math.sin(slowerTime) * 0.5f,
(float) Math.cos(slowerTime) * 0.6f,
0.0f, 0.0f};
//glPatchParameteri(GL_PATCH_VERTICES, 3);//this is the default so is unneeded
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glVertexAttrib4fv(0, attrib);
glDrawArrays(GL_TRIANGLES, 0, 3);
glfwSwapBuffers(window); // swap the color buffers
glfwPollEvents();
}
glDeleteVertexArrays(vertex_array_object);
glDeleteProgram(program);
}
private String readFileAsString(String filename) {
String next = new Scanner(LwjglSetup.class.getResourceAsStream(filename), "UTF-8").useDelimiter("\\A").next();
System.out.println("readFileAsString: " + next);
return next;
}
private void compileShader() {
//int program;
//NEW CODE
//create and compile vertex shader
String vertShaderSource = readFileAsString("/vert.glsl");
vertex_shader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertex_shader, vertShaderSource);
glCompileShader(vertex_shader);
//check compilation
if (glGetShaderi(vertex_shader, GL_COMPILE_STATUS) != 1) {
System.err.println(glGetShaderInfoLog(vertex_shader));
System.exit(1);
}
//create and compile fragment shader
String fragShaderSource = readFileAsString("/frag.glsl");
fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragment_shader, fragShaderSource);
glCompileShader(fragment_shader);
//check compilation
if (glGetShaderi(fragment_shader, GL_COMPILE_STATUS) != 1) {
System.err.println(glGetShaderInfoLog(fragment_shader));
System.exit(1);
}
//create and compile tessellation shader
String tessControlShaderSource = readFileAsString("/control.tess.glsl");
tess_control_shader = glCreateShader(GL40.GL_TESS_CONTROL_SHADER);
glShaderSource(tess_control_shader, tessControlShaderSource);
glCompileShader(tess_control_shader);
//check compilation
if (glGetShaderi(tess_control_shader, GL_COMPILE_STATUS) != 1) {
System.err.println(glGetShaderInfoLog(tess_control_shader));
System.exit(1);
}
//create and compile tessellation shader
String tessEvaluationShaderSource = readFileAsString("/eval.tess.glsl");
tess_evaluation_shader = glCreateShader(GL40.GL_TESS_EVALUATION_SHADER);
glShaderSource(tess_evaluation_shader, tessEvaluationShaderSource);
glCompileShader(tess_evaluation_shader);
//check compilation
if (glGetShaderi(tess_evaluation_shader, GL_COMPILE_STATUS) != 1) {
System.err.println(glGetShaderInfoLog(tess_evaluation_shader));
System.exit(1);
}
//create program and attach it
program = glCreateProgram();
glAttachShader(program, vertex_shader);
glAttachShader(program, fragment_shader);
//IF THESE TWO LINES ARE COMMENTED OUT, THE PROGRAM WORKS...although there
//is no tessellation...
glAttachShader(program, tess_control_shader);
glAttachShader(program, tess_evaluation_shader);
glLinkProgram(program);
//check link
if (glGetProgrami(program, GL_LINK_STATUS) != 1) {
System.err.println(glGetProgramInfoLog(program));
System.exit(1);
}
glValidateProgram(program);
if (glGetProgrami(program, GL_VALIDATE_STATUS) != 1) {
System.err.println(glGetProgramInfoLog(program));
System.exit(1);
}
//delete shaders as the program has them now
glDeleteShader(vertex_shader);
glDeleteShader(fragment_shader);
glDeleteShader(tess_control_shader);
glDeleteShader(tess_evaluation_shader);
//return program;
}
}
vert.glsl
#version 440 core
//'offset' is an input vertex attribute
layout (location=0) in vec4 offset;
layout (location=1) in vec4 color;
out vec4 vs_color;
void main(void)
{
const vec4 vertices[3] = vec4[3]( vec4( 0.25, -0.25, 0.5, 1.0),
vec4(-0.25, -0.25, 0.5, 1.0),
vec4( 0.25, 0.25, 0.5, 1.0));
//Add 'offset' to our hard-coded vertex position
gl_Position = vertices[gl_VertexID] + offset;
//Output a fixed value for vs_color
vs_color = color;
}
frag.glsl
#version 440 core
in vec4 vs_color;
out vec4 color;
void main(void)
{
color = vs_color;
}
control.tess.glsl
#version 440 core
layout (vertices=3) out;
void main(void)
{
//Only if I am invocation 0
if (gl_InvocationID == 0){
gl_TessLevelInner[0] = 5.0;
gl_TessLevelOuter[0] = 5.0;
gl_TessLevelOuter[1] = 5.0;
gl_TessLevelOuter[2] = 5.0;
}
//Everybody copies their input to their output?
gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
eval.tess.glsl
#version 440 core
layout (triangles, equal_spacing, cw) in;
void main(void){
gl_Position = (gl_TessCoord.x * gl_in[0].gl_Position) +
(gl_TessCoord.y * gl_in[1].gl_Position) +
(gl_TessCoord.z * gl_in[2].gl_Position);
}
Finally, if it helps here is some version information, which is printed at the start of the application:
Hello LWJGL 3.1.5 build 1!
OpenGL Version: 4.4.0 NVIDIA 340.107
glDrawArrays(GL_TRIANGLES, 0, 3);
When you draw something with tessellation, you are drawing patches, not triangles. Hence, you have to specify GL_PATCHES:
glDrawArrays(GL_PATCHES, 0, 3);
//Everybody copies their input to their output?
The reason is that the input vertices and output vertices of the tessellation control shader are not related to each other. The input vertices are taken from the input stream, i.e. your vertex buffers (after being processed by the vertex shader). Their number is specified by the GL_PATCH_VERTICES parameter. Each invocation takes this number of vertices from the buffer. The output vertices are kept internally in the pipeline. Their number is specified by the layout directive. This number can be different from the number of input vertices. They can also have different attributes. I find it more intuitive to think of these vertices as pieces of data instead of actual vertices with a geometric meaning. In some cases, this interpretation might make sense, but definitely not in all.
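The patch bookkeeping described above reduces to plain arithmetic, sketched here (the function name is invented for illustration): the draw call consumes the vertex stream in groups of GL_PATCH_VERTICES, and each complete group becomes one patch.

```cpp
#include <cassert>

// With tessellation, glDrawArrays(GL_PATCHES, first, count) consumes the
// vertex stream in groups of GL_PATCH_VERTICES (default 3). Each complete
// group forms one patch; an incomplete trailing group is discarded.
int patchCount(int vertexCount, int patchVertices) {
    return vertexCount / patchVertices; // integer division drops leftovers
}

// The number of OUTPUT vertices per patch comes from the TCS layout
// declaration instead, e.g. layout (vertices = 3) out; it need not match
// patchVertices.
```

For the draw call above, patchCount(3, 3) is 1: a single patch whose three control points feed the layout (vertices = 3) control shader.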

Depth testing is not working at all

For some reason, in my project my depth testing is not working. I have made sure it is enabled, and it still doesn't work. I know this because I can see certain faces being drawn over each other, and different objects (cubes) in the scene are drawn over each other.
I am using the default framebuffer, so there should be a depth buffer. I also checked gl_FragCoord.z, and it returned the correct depth. I've gone through my code thoroughly for ages and searched dozens of Google pages, and I still can't find the answer.
Here is the code presented in order of execution that is relevant to this question:
Init()
void Program::Init()
{
std::cout << "Initialising" << std::endl;
glViewport(0, 0, Options::width, Options::height);
glewExperimental = true; // Needed in core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return;
}
window.setKeyRepeatEnabled(false);
sf::Mouse::setPosition(sf::Vector2i(Options::width / 2, Options::height / 2), window);
LoadGameState(GameState::INGAME, false);
Run();
}
GLInit()
void Program::GLInit() {
lampShader = Shader("StandardVertex.shader", "LightingFragmentShader.shader");
ourShader = Shader("VertexShader.shader", "SimpleFragmentShader.shader");
screenShader = Shader("FrameVertexShader.shader", "FrameFragmentShader.shader");
skyboxShader = Shader("SkyboxVertex.shader", "SkyboxFragment.shader");
cubeDepthShader = Shader("CubeDepthVertex.shader", "CubeDepthFragment.shader", &std::string("CubeDepthGeometry.shader"));
debugDepthQuad = Shader("SimpleVertex.shader", "DepthFragment.shader");
blurShader = Shader("SimpleVertex.shader", "BloomFragment.shader");
bloomShader = Shader("SimpleVertex.shader", "FinalBloom.shader");
depthShader = Shader("VertexDepth.shader", "EmptyFragment.shader");
geometryPass = Shader("GeometryVertex.shader", "GeometryFragment.shader");
lightingPass = Shader("SimpleVertex.shader", "LightingFragment.shader");
shaderSSAO = Shader("SimpleVertex.shader", "SSAOFragment.shader");
shaderSSAOblur = Shader("SimpleVertex.shader", "SSAOBlurFragment.shader");
colliders = Shader("VertexShader.shader", "GreenFragment.shader");
glEnable(GL_DEPTH_TEST);
}
and another function that runs on repeat:
RunGame()
void Program::RunGame()
{
scene->mainCamera.Input(Options::deltaTime, window);
if(timeSinceGameStart.getElapsedTime().asSeconds()<2)
DoLights();
if(timeSinceGameStart.getElapsedTime().asSeconds() > 5.0f)
scene->DoPhysics();
scene->CheckCollisions();
glClearColor(0.32f, 0.5f, 0.58f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
projection = glm::mat4();
view = glm::mat4();
// view/projection transformations
projection = glm::perspective(glm::radians(Options::fov), (float)Options::width / (float)Options::height, 0.1f, 100.0f);
view = scene->mainCamera.GetViewMatrix();
lampShader.use();
lampShader.setMat4("projection", projection);
lampShader.setMat4("view", view);
lampShader.setInt("setting", Options::settings);
glm::mat4 model;
for (unsigned int i = 0; i < lights.size(); i++)
{
model = glm::mat4();
model = glm::translate(model, lights[i].position);
model = glm::scale(model, glm::vec3(0.125f));
lampShader.setMat4("model", model);
lampShader.setVec3("lightColor", lights[i].colour);
Scene::renderCube();
}
if (Options::showColliders) {
colliders.use();
colliders.setMat4("projection", projection);
colliders.setMat4("view", view);
scene->RenderColliders(colliders);
}
}
I have looked at lots of pages and I have done all the recommendations:
call glEnable(GL_DEPTH_TEST)
clear the depth buffer
make sure zNear is not a tiny number
not call functions that would disable depth like glDepthMask(GL_FALSE)
Any help is welcome. I hope this is enough code; if it's not, I'll provide any requested code. Most of the functions not shown are pretty self-explanatory. Also, if you think all the code above is fine, please tell me in the comments so I know the issue is not there.
I am obviously also using C++ and GLEW. I am also using SFML for my window.
Thanks for any answers
EDIT:
Code responsible for creating sfml window:
Program::Program()
:settings(24, 8, 4, 3, 0),
window(sf::VideoMode(Options::width, Options::height), "OpenGL", sf::Style::Default, settings)
{
Init();
}
Vertex shader for lights:
#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoords;
out vec3 Normal;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main()
{
Normal = transpose(inverse(mat3(model))) * aNormal;
gl_Position = projection * view * model * vec4(aPos, 1.0);
}
Declaration of window and settings
sf::RenderWindow window;
sf::ContextSettings settings;
The problem is that in C++, members of a class are initialised in the order of their declaration, not in the order they are listed in any constructor's mem-initialiser list.
If you turn on enough warnings in your compiler, it should warn you that in the Program constructor you've shown, settings will be initialised after window. Which is precisely your problem, as settings is used before it gets the values you specified for it. Swap the order of the members in the class to resolve it.
The reason for the rule is that a fundamental C++ rule is "objects of automatic storage duration are always destroyed in the exact opposite order of their construction." A class's destructor therefore needs one order in which to destroy the members, regardless of which constructor was used to create the object. The order is therefore fixed to be that of declaration. Note that order of declaration is also used to control creation/destruction order in other contexts (such as function-local variables), so using it here is consistent.
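The declaration-order rule is easy to observe directly. A minimal, self-contained sketch (the logging vector and class names are invented for the demonstration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Records the order in which member constructors actually run.
static std::vector<char> g_log;

struct Settings { Settings() { g_log.push_back('s'); } };
struct Window   { Window()   { g_log.push_back('w'); } };

struct Program {
    Window   window;    // declared first  -> constructed first
    Settings settings;  // declared second -> constructed second
    // The mem-initialiser list order below is IGNORED for construction
    // order; GCC/Clang warn about it with -Wreorder (part of -Wall).
    Program() : settings(), window() {}
};

// Construct one Program and report the observed construction order.
std::string constructionOrder() {
    g_log.clear();
    Program p;
    return std::string(g_log.begin(), g_log.end());
}
```

Despite `settings()` being listed first, `window` is constructed first because it is declared first, which is exactly the trap in the question's code.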

Asynchronous texture upload with Qt and OpenGL

I'm writing a small video player using QOpenGLWidget. At the moment I'm struggling to get asynchronous texture upload working. In an earlier version of my code I wait for a "next frame" signal, upon which the frame is read from the hard drive, uploaded to the GPU and then rendered. Now I want to get this working asynchronously using a ring buffer on the GPU. I want a separate thread to upload the next N textures, and the main thread to take one of these textures, display it and invalidate it. As a first step I wrote a class to upload a single texture, which I want to use from my QOpenGLWidget. I created shared contexts between my class and the QOpenGLWidget.
class GLWidget : public QOpenGLWidget, protected QOpenGLFunctions;
void GLWidget::paintGL() {
QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
m_program->bind();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
m_program->setUniformValue("textureSamplerRed", 0);
m_program->setUniformValue("textureSamplerGreen", 1);
m_program->setUniformValue("textureSamplerBlue", 2);
glUniformMatrix4fv(m_matMVP_Loc, 1, GL_FALSE, &m_MVP[0][0]);
m_vertice_indices_Vbo.bind();
m_vertices_Vbo.bind();
m_texture_coordinates_Vbo.bind();
glDrawElements(
GL_TRIANGLE_STRIP, // mode
m_videoFrameTriangles_indices.size(), // count
GL_UNSIGNED_INT, // type
(void*)0 // element array buffer offset
);
m_program->release();
}
I wait for GLWidget::initializeGL() to finish and emit a signal, which is connected to the initialization of my texture-loading class:
class TextureLoader2 : public QObject, protected QOpenGLFunctions;
void TextureLoader2::initialize(QOpenGLContext *context)
{
// sharing the OpenGL context with GLWidget
m_context.setFormat(context->format()); // need this?
m_context.setShareContext(context);
m_context.create();
m_context.makeCurrent(context->surface());
m_surface = context->surface();
}
And here is how I load a new frame:
void TextureLoader2::loadNextFrame(const int frameIdx)
{
QElapsedTimer timer;
timer.start();
bool is_current = m_context.makeCurrent(m_surface);
// some code which reads the frame from disk and sends to the GPU.
// srcR is a pointer to the the data for red. the upload for G and B is similar
if(!m_texture_Rdata)
{
m_texture_Rdata = std::make_shared<QOpenGLTexture>(QOpenGLTexture::Target2D);
m_texture_Rdata->create();
m_texture_Rdata->setSize(m_frameWidth,m_frameHeight);
m_texture_Rdata->setFormat(QOpenGLTexture::R8_UNorm);
m_texture_Rdata->allocateStorage(QOpenGLTexture::Red,QOpenGLTexture::UInt8);
m_texture_Rdata->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, srcR);
// Set filtering modes for texture minification and magnification
m_texture_Rdata->setMinificationFilter(QOpenGLTexture::Nearest);
m_texture_Rdata->setMagnificationFilter(QOpenGLTexture::Linear);
m_texture_Rdata->setWrapMode(QOpenGLTexture::ClampToBorder);
}
else
{
m_texture_Rdata->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, srcR);
}
// these are QOpenGLTextures
m_texture_Rdata->bind(0);
m_texture_Gdata->bind(1);
m_texture_Bdata->bind(2);
emit frameUploaded();
}
My QOpenGLWidget is displaying nothing unfortunately. I don't know how to proceed.
I know that the code for reading and sending the texture to the GPU is working, since if I leave out the line
bool is_current = m_context.makeCurrent(m_surface);
my whole window (not just the frame containing the QOpenGLWidget) is overwritten, displaying the texture.
I've been searching quite a bit, but I couldn't find any simple working example code for what I want to do. I hope someone has an idea what the issue might be. I've seen people mention using a QOffscreenSurface or a second hidden widget: similar setups, but with different contexts. Maybe I have to use one of those?
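As for the planned ring buffer itself, its bookkeeping is independent of any GL calls and can be sketched separately (all names here are hypothetical; a real implementation also needs a mutex or atomics between the two threads, plus GL fence syncs so the render thread never samples a texture that is still being written):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the ring-buffer bookkeeping for the planned uploader: the
// upload thread produces into slot (head % N), the render thread consumes
// from slot (tail % N). In the real player each slot would hold a texture.
struct SlotRing {
    static const std::size_t N = 4;   // number of pre-uploaded frames
    std::size_t head = 0;             // next slot to upload into
    std::size_t tail = 0;             // next slot to display

    bool full()  const { return head - tail == N; }
    bool empty() const { return head == tail; }

    // Upload thread: claim a slot once a texture upload has finished.
    bool push() { if (full()) return false; ++head; return true; }

    // Render thread: return the slot index to display and invalidate it
    // (freeing it for re-upload), or -1 if no frame is ready yet.
    int pop() { if (empty()) return -1; return (int)(tail++ % N); }
};
```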
My fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 fragmentUV;
// Output data
out vec4 color_0;
// Values that stay constant for the whole mesh.
uniform sampler2D textureSamplerRed;
uniform sampler2D textureSamplerGreen;
uniform sampler2D textureSamplerBlue;
void main(){
vec3 myColor;
myColor.r = texture2D( textureSamplerRed, fragmentUV ).r;
myColor.g = texture2D( textureSamplerGreen, fragmentUV ).r;
myColor.b = texture2D( textureSamplerBlue, fragmentUV ).r;
color_0 = vec4(myColor, 1.0f);
}
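The fragment shader above reassembles each pixel from three single-channel (planar) textures. For reference, the same planar-to-interleaved merge sketched on the CPU side (the function name is invented):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Merge three single-channel planes (as uploaded to the R8 textures above)
// into one interleaved RGB buffer. Each plane holds width*height bytes.
std::vector<std::uint8_t> mergePlanes(const std::vector<std::uint8_t>& r,
                                      const std::vector<std::uint8_t>& g,
                                      const std::vector<std::uint8_t>& b) {
    std::vector<std::uint8_t> rgb;
    rgb.reserve(r.size() * 3);
    for (std::size_t i = 0; i < r.size(); ++i) {
        rgb.push_back(r[i]);
        rgb.push_back(g[i]);
        rgb.push_back(b[i]);
    }
    return rgb;
}
```

Doing the merge in the shader, as the code above does, avoids this extra CPU pass: the three planes are uploaded as-is and sampled per fragment.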

Libgdx 3D - Point Light shows black box & rect (PointLight not working)

I am creating a 3D scene, currently a box and a rect, and trying to enable lighting.
When I create a PointLight and add it to the Environment, everything turns black. Why?
All I want to do is create a 3D scene and enable a point light, like a sun, with rays coming from a point and shading the objects.
Code:
environment = new Environment();
environment.add(new PointLight().set(1f, 1f, 1f, 0, 0, 20f, 100f));
modelBatch=new ModelBatch();
..
square=new ModelBuilder().createBox(300,300,300,new Material(ColorAttribute.createDiffuse(Color.GREEN)),
VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
squareinst=new ModelInstance(square);
squareinst.transform.setTranslation(-500,0,0);
--
sprites.get(0).setRotationY(sprites.get(0).getRotationY() + 1f);
sprites.get(1).setRotationY(sprites.get(1).getRotationY() - 1f);
squareinst.transform.rotate(1,0,0,1);
modelBatch.begin(camera);
for(Sprite3D sp:sprites)// has 3d rect models
sp.draw(modelBatch,environment);
modelBatch.render(squareinst,environment);
modelBatch.end();
PointLight turning everything black
Without using environment or lights
As per my investigation: if the point light is not working, then everything should be black, as it currently is, because the environment needs a light. It works fine with a directional light (though the back face of the rect stays black even after rotations; I don't know why).
libGDX version 1.6.1, Android Studio.
I checked it on both an Android device and the desktop.
Please, I really need to get this PointLight working. I don't know if it will take a custom shader; if so, please guide me to some links, because I am not experienced with shaders. I also read about PointLight not working on some devices, or not working with OpenGL ES 2.0 enabled, but I am not sure.
I tried a lot of things and values. I know about ambient light, but that has no use in my case. A directional light also has limited use (it can serve as a fallback if this doesn't work).
Edit:
It's working now; check the answer below:
If you are using a big camera size or big model size, try adding more zeros to the point light intensity until the light is visible.
Here is a very simple example that shows a point light being rotated around a sphere:
public class PointLightTest extends ApplicationAdapter {
ModelBatch modelBatch;
Environment environment;
PerspectiveCamera camera;
CameraInputController camController;
PointLight pointLight;
Model model;
ModelInstance instance;
@Override
public void create () {
modelBatch = new ModelBatch();
camera = new PerspectiveCamera();
camera.position.set(5f, 5f, 5f);
camera.lookAt(0f, 0f, 0f);
camController = new CameraInputController(camera);
environment = new Environment();
environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1.0f));
environment.add(pointLight = new PointLight().set(0.8f, 0.8f, 0.8f, 2f, 0f, 0f, 10f));
ModelBuilder mb = new ModelBuilder();
model = mb.createSphere(1f, 1f, 1f, 20, 10, new Material(ColorAttribute.createDiffuse(Color.GREEN)), Usage.Position | Usage.Normal);
instance = new ModelInstance(model);
Gdx.input.setInputProcessor(camController);
}
@Override
public void resize (int width, int height) {
camera.viewportWidth = width;
camera.viewportHeight = height;
camera.update();
}
@Override
public void render () {
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
camController.update();
pointLight.position.rotate(Vector3.Z, Gdx.graphics.getDeltaTime() * 90f);
modelBatch.begin(camera);
modelBatch.render(instance, environment);
modelBatch.end();
}
@Override
public void dispose () {
model.dispose();
modelBatch.dispose();
}
}
Note that the light needs to be outside the model and within range for it to light the model. Try what happens when you gradually move the light away from or towards the model. The Renderable in that other example was used to visualize the location of the light.
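The "add more zeros" advice from the question's edit can be made concrete. Assuming a roughly inverse-square falloff (an approximation; the exact attenuation depends on the shader in use), the intensity needed for the same brightness at a given distance grows with the square of that distance:

```cpp
#include <cassert>

// Rough sketch of why huge scenes need huge point-light intensities.
// Assuming brightness ~ intensity / distance^2 (an approximation of
// typical point-light attenuation), the intensity required to reach a
// target brightness at a given distance is:
double requiredIntensity(double targetBrightness, double distance) {
    return targetBrightness * distance * distance;
}
```

A light 300 units from a 300-unit box needs on the order of 90,000x the intensity that lights a unit sphere from 1 unit away, which is why scaling the scene up without scaling the intensity makes everything look unlit.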

Best place to store model matrix in OpenGL?

I'm currently refactoring my OpenGL program (used to be one single enormous file) to use C++ classes. The basic framework looks like this:
I have an interface Drawable with the function virtual void Render(GLenum type) const = 0; and a bunch of classes implementing this interface (Sphere, Cube, Grid, Plane, PLYMesh and OBJMesh).
In my main.cpp I'm setting up a scene containing multiple of these objects, each with its own shader program. After setting uniform buffer objects and each program's individual uniforms, I'm calling glutMainLoop().
In my Display function, called each frame, the first thing I do is set up all the transformation matrices; at the end I call the above-mentioned Render function for every object in the scene:
void Display()
{
// Clear framebuffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
modelViewMatrix = glm::mat4(1.0);
projectionMatrix = glm::mat4(1.0);
normalMatrix = glm::mat4(1.0);
modelViewMatrix = glm::lookAt(glm::vec3(0.0, 0.0, mouse_translate_z), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
modelViewMatrix = glm::rotate(modelViewMatrix, -mouse_rotate_x, glm::vec3(1.0f, 0.0f, 0.0f));
modelViewMatrix = glm::rotate(modelViewMatrix, -mouse_rotate_y, glm::vec3(0.0f, 1.0f, 0.0f));
projectionMatrix = glm::perspective(45.0f, (GLfloat)WINDOW_WIDTH / (GLfloat)WINDOW_HEIGHT, 1.0f, 10000.f);
// No non-uniform scaling (only use mat3(normalMatrix) in the shader)
normalMatrix = modelViewMatrix;
glBindBuffer(GL_UNIFORM_BUFFER, ubo_global_matrices);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(modelViewMatrix));
glBufferSubData(GL_UNIFORM_BUFFER, 1 * sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(projectionMatrix));
glBufferSubData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(normalMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
// ************************************************** //
// **************** DRAWING COMMANDS **************** //
// ************************************************** //
// Grid
if (grid->GetIsRendered())
{
program_GRID_NxN->Use();
grid->Render(GL_LINES);
program_GRID_NxN->UnUse();
}
// Plane
...
// Sphere
...
// Swap front and back buffer and redraw scene
glutSwapBuffers();
glutPostRedisplay();
}
My question now is the following: with the current code, I'm using the same ModelView matrix for every object. What if I want to translate only the sphere, or rotate only the plane, without changing the vertex positions? Where is the best place to store the model matrix in a large OpenGL program? What about putting a protected member variable glm::mat4 modelMatrix into the Drawable interface? Also, should the model and the view matrix be split (for example using a Camera class containing only the view matrix)?
My answer is mainly based on Tom Dalling's excellent tutorial, but with some minor changes.
Firstly, all your view and projection matrix operations should go in the Camera class. Camera provides a convenient way of getting the combined view and projection matrix by calling the matrix() method.
glm::mat4 Camera::matrix() const {
return projection() * view();
}
Camera.cpp
Then for this example you'd have a ModelAsset, which contains everything you need to render the geometry. This asset should be unique and stored in a ResourceManager or something similar.
struct ModelAsset {
Shader* shader;
Texture* texture;
GLuint vbo;
GLuint vao;
GLenum drawType;
GLint drawStart;
GLint drawCount;
};
Then you have a ModelInstance, which has a pointer to the asset plus a unique transform matrix. This way you can create as many instances of a particular asset as you like, each with its own unique transformation.
struct ModelInstance {
ModelAsset* asset;
glm::mat4 transform;
};
ModelInstance cube;
cube.asset = &asset; // An asset that you created somewhere else (e.g. ResourceManager)
cube.transform = glm::mat4(); // Your unique transformation for this instance
To render an instance, you pass the camera and model matrices as uniforms to the shader, and the shader does the rest of the work.
shaders->setUniform("camera", camera.matrix());
shaders->setUniform("model", cube.transform);
Finally it's best when all your instances are grouped nicely in some resizable container.
std::vector<ModelInstance> instances;
instances.push_back(cube);
instances.push_back(sphere);
instances.push_back(pyramid);
for (ModelInstance& i : instances) { // note the reference: a plain copy would discard the rotation
i.transform = glm::rotate(i.transform, getTime(), glm::vec3(0.0f, 1.0f, 0.0f));
}
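The asset/instance split above can be sketched end-to-end without GLM, using a plain column-major 4x4 matrix and translation only, to keep the example self-contained (all names here are illustrative; a real program would use glm::mat4 and a ModelAsset pointer):

```cpp
#include <array>
#include <cassert>

// Column-major 4x4 matrix, matching the OpenGL/GLM layout: the translation
// component lives in elements 12..14.
using Mat4 = std::array<float, 16>;

Mat4 identity() {
    Mat4 m{}; // zero-initialised
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

// Compose a translation onto m. Adding into elements 12..14 is only valid
// because this sketch never mixes in rotation or scale; GLM's translate()
// handles the general case.
Mat4 translate(Mat4 m, float x, float y, float z) {
    m[12] += x; m[13] += y; m[14] += z;
    return m;
}

// Per-instance data: shared geometry (stand-in id) plus a unique transform.
struct ModelInstance {
    int assetId;     // stands in for the shared ModelAsset*
    Mat4 transform;  // the per-instance model matrix
};
```

Each frame you would update `transform` per instance and upload it as the "model" uniform, exactly as in the loop above.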