Why does this projection give a wrong result? - opengl

I am trying to obtain so-called world space coordinates for 2D rendering by creating an orthographic projection matrix. But the result is disappointing: I expect a blue triangle, but there are only a few blue pixels in the bottom right.
Here is my code in the Nim language. It is a very simplified version that reproduces the issue.
The important part is in the function "projection2D", or maybe in the vertex shader. I don't think the problem is elsewhere, but to be safe I post the complete example.
type
OGLfloat = float32
OGLuint = uint32
OGLint = int32
type Mat4x4* = array[16, OGLfloat] # 4 x 4 Matrix
# The OpenGL constants should be here, but they are not pasted to save space.
#....
const
POSITION_LENGTH = 3.OGLint
COLOR_LENGTH = 4.OGLint
const
WINDOW_W = 640
WINDOW_H = 480
let
colorDataOffset = POSITION_LENGTH * OGLint(sizeof(OGLfloat))
# OpenGL function import should be here.
#...
var
# Thanks to the orthographic projection matrix I expect to use coordinates in pixels.
vertices = @[OGLfloat(420), 0, 0, # Position
0, 0, 1, 1, # Color
640, 480, 0,
0, 0, 1, 1,
0, 480, 0,
0, 0, 1, 1]
indices = @[OGLuint(0), 1, 2]
proc projection2D(left, right, bottom, top, far, near:float):Mat4x4 =
result = [OGLfloat(1), 0, 0, 0, # Start from an identity matrix.
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1]
# Orthographic projection, inspired by a Wikipedia article example.
result[0] = OGLfloat(2.0 / (right - left))
result[5] = OGLfloat(2.0 / (top - bottom))
result[3] = OGLfloat(- ((right + left) / (right - left)))
result[7] = OGLfloat(- ((top + bottom) / (top - bottom)))
result[10] = OGLfloat(-2 / (far - near))
result[11] = OGLfloat(-((far + near)/(far - near)))
# These parameters come from "learnopengl.com".
var projectionMatrix = projection2D(0.0, OGLfloat(WINDOW_W), OGLfloat(WINDOW_H), 0.0, - 1.0, 1.0)
var glfwErr = glfwInit()
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3)
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3)
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE)
var winHandle = glfwCreateWindow(WINDOW_W, WINDOW_H)
glfwMakeContextCurrent(winHandle)
var glewErr = glewInit()
var
shadID:OGLuint
vertSrc:cstring = """
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec4 aColor;
out vec4 vColor;
uniform mat4 projection;
void main()
{
gl_Position = projection * vec4(aPos, 1.0f);
vColor = aColor;
}
"""
fragSrc:cstring = """
#version 330 core
out vec4 FragColor;
in vec4 vColor;
void main()
{
FragColor = vColor;
}
"""
proc send_src(vert:var cstring, frag:var cstring):OGLuint =
var success:OGLint
# vertex
var vertexShader = glCreateShader(GL_VERTEX_SHADER)
glShaderSource(vertexShader, 1, addr vert, nil)
glCompileShader(vertexShader)
# Check compilation errors.
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, addr success)
if bool(success) == false:
echo(" vertex shader compilation failed (send_src)")
else:
echo("vertexShader compiled (send_src)")
# fragment
var fragmentShader = glCreateShader(GL_FRAGMENT_SHADER)
glShaderSource(fragmentShader, 1, addr frag, nil)
glCompileShader(fragmentShader)
# Check compilation errors.
glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, addr success)
if bool(success) == false:
echo("fragment shader compilation failed (send_src)")
else:
echo("fragmentShader compiled (send_src)")
# Shader program
result = glCreateProgram()
glAttachShader(result, vertexShader)
glAttachShader(result, fragmentShader)
glLinkProgram(result)
# Check for linkage errors.
glGetProgramiv(result, GL_LINK_STATUS, addr success)
if success == 0:
echo("program linking failed (send_src)")
else:
echo("shader linked (send_src)")
glDeleteShader(vertexShader)
glDeleteShader(fragmentShader)
glViewport(0, 0, WINDOW_W, WINDOW_H)
shadID = send_src(vertSrc, fragSrc)
var VAO, VBO, EBO:OGLuint
glGenVertexArrays(1, addr VAO)
glGenBuffers(1, addr VBO)
glGenBuffers(1, addr EBO)
glBindVertexArray(VAO)
glBindBuffer(GL_ARRAY_BUFFER, VBO)
glBufferData(GL_ARRAY_BUFFER, vertices.len * sizeof(OGLfloat),
addr vertices[0], GL_STATIC_DRAW)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO)
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.len * sizeof(OGLuint),
addr indices[0], GL_STATIC_DRAW)
# Position layout
glVertexAttribPointer(0, POSITION_LENGTH, GL_FLOAT, GL_FALSE, (POSITION_LENGTH + COLOR_LENGTH) * OGLint(sizeof(OGLfloat)),
nil)
glEnableVertexAttribArray(0)
# Color layout
glVertexAttribPointer(1, COLOR_LENGTH, GL_FLOAT, GL_FALSE, (POSITION_LENGTH + COLOR_LENGTH) * OGLint(sizeof(OGLfloat)),
cast[pointer](colorDataOffset))
glEnableVertexAttribArray(1)
glBindBuffer(GL_ARRAY_BUFFER, 0)
glBindVertexArray(0)
glUseProgram(shadID)
while bool(glfwWindowShouldClose(winHandle)) == false:
glClearColor(0.2, 0.3, 0.3, 1.0)
glClear(GL_COLOR_BUFFER_BIT)
glBindVertexArray(VAO)
glUniformMatrix4fv(glGetUniformLocation(shadID, "projection"), 1, GL_FALSE, addr projectionMatrix[0])
glDrawElements(GL_TRIANGLES, OGLint(indices.len), GL_UNSIGNED_INT, nil)
glfwSwapBuffers(winHandle)
glfwPollEvents()
glDeleteVertexArrays(1, addr VAO)
glDeleteBuffers(1, addr VBO)
glDeleteBuffers(1, addr EBO)
glfwDestroyWindow(winHandle)
glfwTerminate()
Don't hesitate to share your thoughts!

You have to transpose the projection matrix; this means the 3rd parameter of glUniformMatrix4fv has to be GL_TRUE:
glUniformMatrix4fv(glGetUniformLocation(shadID, "projection"),
1, GL_TRUE, addr projectionMatrix[0])
Or you have to initialize the matrix according to the specification (see below):
proc projection2D(left, right, bottom, top, far, near: float): Mat4x4 =
  result = [OGLfloat(1), 0, 0, 0, # Start from an identity matrix.
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 1]
  # Orthographic projection, inspired by a Wikipedia article example.
  result[0] = OGLfloat(2.0 / (right - left))
  result[5] = OGLfloat(2.0 / (top - bottom))
  result[12] = OGLfloat(- ((right + left) / (right - left)))
  result[13] = OGLfloat(- ((top + bottom) / (top - bottom)))
  result[10] = OGLfloat(-2 / (far - near))
  result[14] = OGLfloat(-((far + near) / (far - near)))
See The OpenGL Shading Language 4.6, 5.4.2 Vector and Matrix Constructors, page 101:
To initialize a matrix by specifying vectors or scalars, the components are assigned to the matrix elements in column-major order.
mat4(float, float, float, float,   // first column
     float, float, float, float,   // second column
     float, float, float, float,   // third column
     float, float, float, float);  // fourth column
Note that, in contrast to a mathematical matrix, where the columns are written from top to bottom (which feels natural), when an OpenGL matrix is initialized the columns are written from left to right. This has the benefit that the x, y, z components of an axis or of a translation are contiguous in memory.
See also Data Type (GLSL) - Matrix constructors
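To make the column-major layout concrete, here is a minimal sketch in plain Python (not part of the original answer) that builds the same orthographic matrix in column-major order and checks how the window corners are mapped, assuming the same (left, right, bottom, top, far, near) arguments the question passes to projection2D:
def ortho_column_major(left, right, bottom, top, far, near):
    # 16 floats in column-major order: m[c*4 + r] is row r of column c.
    m = [0.0] * 16
    m[0]  = 2.0 / (right - left)
    m[5]  = 2.0 / (top - bottom)
    m[10] = -2.0 / (far - near)
    m[12] = -(right + left) / (right - left)   # translation x (4th column)
    m[13] = -(top + bottom) / (top - bottom)   # translation y
    m[14] = -(far + near) / (far - near)       # translation z
    m[15] = 1.0
    return m

def transform(m, x, y, z):
    # column-major matrix times a column vector with w = 1
    return [m[0]*x + m[4]*y + m[8]*z  + m[12],
            m[1]*x + m[5]*y + m[9]*z  + m[13],
            m[2]*x + m[6]*y + m[10]*z + m[14]]

m = ortho_column_major(0.0, 640.0, 480.0, 0.0, -1.0, 1.0)
print(transform(m, 640.0, 480.0, 0.0))   # [1.0, -1.0, 0.0]: bottom-right pixel maps to the bottom-right NDC corner
print(transform(m, 0.0, 0.0, 0.0))       # [-1.0, 1.0, 0.0]: top-left pixel maps to the top-left NDC corner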

Related

pyopengl, texture coordinates for vertex lists [duplicate]

I am working on a project and I need to use texture arrays to apply textures. I have asked many questions about this, none of which got an answer I was completely satisfied with (Get access to later versions of GLSL, OpenGL: Access Array Texture in GLSL, and OpenGL: How would I implement texture arrays?), so I'm asking a broader question in the hope of getting a response. Anyway, how would I texture an object in OpenGL (PyOpenGL more specifically, but it's fine if you put your answer in C++)? I already have a way to load the texture arrays, just not a way to apply them. This is the desired result:
Image from opengl-tutorial
and this is what I currently have for loading array textures:
def load_texture_array(path, width, height):
    teximg = pygame.image.load(path)
    texels = teximg.get_buffer().raw
    texture = GLuint(0)
    layerCount = 6
    mipLevelCount = 1
    glGenTextures(1, texture)
    glBindTexture(GL_TEXTURE_2D_ARRAY, texture)
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, mipLevelCount, GL_RGBA8, width, height, layerCount)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, width, height, layerCount, GL_RGBA, GL_UNSIGNED_BYTE, texels)
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
TLDR: How would I apply textures to objects in OpenGL using texture arrays?
I will happily provide any other information if necessary.
If you want to use a 2D Array Texture for a cube, each of the 6 textures for the 6 sides must be the same size.
You can look up the texture with 3-dimensional texture coordinates. The 3rd component of the texture coordinate is the index of the 2D texture in the 2D texture array.
Hence the texture coordinates for the 6 sides are
0: [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
1: [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
2: [(0, 0, 2), (1, 0, 2), (1, 1, 2), (0, 1, 2)]
3: [(0, 0, 3), (1, 0, 3), (1, 1, 3), (0, 1, 3)]
4: [(0, 0, 4), (1, 0, 4), (1, 1, 4), (0, 1, 4)]
5: [(0, 0, 5), (1, 0, 5), (1, 1, 5), (0, 1, 5)]
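For illustration, the same table can be generated with a few lines of Python (not from the original answer); the 3rd component is simply the layer index:
quad_uv = [(0, 0), (1, 0), (1, 1), (0, 1)]
# One set of 3-dimensional texture coordinates per cube side; the 3rd value selects the layer.
cube_uvw = [[(u, v, layer) for (u, v) in quad_uv] for layer in range(6)]
for layer, coords in enumerate(cube_uvw):
    print(layer, coords)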
Get the 3 dimensional texture coordinate attribute in the vertex shader and pass it to the fragment shader:
in vec3 a_uv;
out vec3 v_uv;
// [...]
void main()
{
    v_uv = a_uv;
    // [...]
}
Use the 3 dimensional texture coordinate to look up the sampler2DArray in the fragment shader:
in vec3 v_uv;
uniform sampler2DArray u_texture;
// [...]
void main()
{
    vec4 color = texture(u_texture, v_uv.xyz);
    // [...]
}
Create a GL_TEXTURE_2D_ARRAY and use glTexSubImage3D to load 6 2-dimensional images to the 6 planes of the 2D Array Texture. In the following, image_planes is a list with the 6 2-dimensional image planes:
tex_obj = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D_ARRAY, tex_obj)
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA, sizeX, sizeY, 6, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)
for i in range(6):
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, sizeX, sizeY, 1, GL_RGBA, GL_UNSIGNED_BYTE, image_planes[i])
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
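At draw time the array texture has to be bound and the sampler2DArray uniform has to be set to the texture unit it is bound to. A minimal sketch (not part of the original answer; program stands for the linked shader program object):
from OpenGL.GL import *   # assumes a current OpenGL context and the tex_obj created above

glUseProgram(program)                                        # program: the linked shader program (assumed)
glActiveTexture(GL_TEXTURE0)                                 # texture unit 0
glBindTexture(GL_TEXTURE_2D_ARRAY, tex_obj)
glUniform1i(glGetUniformLocation(program, "u_texture"), 0)   # u_texture samples from unit 0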
See also PyGame and OpenGL 4.
Minimal example:
import os, math, ctypes
import glm
from OpenGL.GL import *
from OpenGL.GL.shaders import *
from OpenGL.arrays import *
import pygame
pygame.init()
image_path = r"images"
image_names = ["banana64.png", "apple64.png", "fish64.png", "rocket64.png", "ice64.png", "boomerang64.png"]
image_planes = [
(GLubyte * 4)(255, 0, 0, 255), (GLubyte * 4)(0, 255, 0, 255), (GLubyte * 4)(0, 0, 255, 255),
(GLubyte * 4)(255, 255, 0, 255), (GLubyte * 4)(0, 255, 255, 255), (GLubyte * 4)(255, 0, 255, 255)]
image_size = (1, 1)
for i, filename in enumerate(image_names):
try:
image = pygame.image.load(os.path.join(image_path, filename))
image_size = image.get_size()
image_planes[i] = pygame.image.tostring(image, 'RGBA')
except:
pass
class MyWindow:
__glsl_vert = """
#version 130
in vec3 a_pos;
in vec3 a_nv;
in vec3 a_uv;
out vec3 v_pos;
out vec3 v_nv;
out vec3 v_uv;
uniform mat4 u_proj;
uniform mat4 u_view;
uniform mat4 u_model;
void main()
{
mat4 model_view = u_view * u_model;
mat3 normal = mat3(model_view);
vec4 view_pos = model_view * vec4(a_pos.xyz, 1.0);
v_pos = view_pos.xyz;
v_nv = normal * a_nv;
v_uv = a_uv;
gl_Position = u_proj * view_pos;
}
"""
__glsl_frag = """
#version 130
out vec4 frag_color;
in vec3 v_pos;
in vec3 v_nv;
in vec3 v_uv;
uniform sampler2DArray u_texture;
void main()
{
vec3 N = normalize(v_nv);
vec3 V = -normalize(v_pos);
float ka = 0.1;
float kd = max(0.0, dot(N, V)) * 0.9;
vec4 color = texture(u_texture, v_uv.xyz);
frag_color = vec4(color.rgb * (ka + kd), color.a);
}
"""
def __init__(self, w, h):
self.__caption = 'OpenGL Window'
self.__vp_size = [w, h]
pygame.display.gl_set_attribute(pygame.GL_DEPTH_SIZE, 24)
self.__screen = pygame.display.set_mode(self.__vp_size, pygame.DOUBLEBUF| pygame.OPENGL)
self.__program = compileProgram(
compileShader( self.__glsl_vert, GL_VERTEX_SHADER ),
compileShader( self.__glsl_frag, GL_FRAGMENT_SHADER ),
)
self.___attrib = { a : glGetAttribLocation (self.__program, a) for a in ['a_pos', 'a_nv', 'a_uv'] }
print(self.___attrib)
self.___uniform = { u : glGetUniformLocation (self.__program, u) for u in ['u_model', 'u_view', 'u_proj'] }
print(self.___uniform)
v = [[-1,-1,1], [1,-1,1], [1,1,1], [-1,1,1], [-1,-1,-1], [1,-1,-1], [1,1,-1], [-1,1,-1]]
n = [[0,0,1], [1,0,0], [0,0,-1], [-1,0,0], [0,1,0], [0,-1,0]]
e = [[0,1,2,3], [1,5,6,2], [5,4,7,6], [4,0,3,7], [3,2,6,7], [1,0,4,5]]
t = [[0, 0], [1, 0], [1, 1], [0, 1]]
index_array = [si*4+[0, 1, 2, 0, 2, 3][vi] for si in range(6) for vi in range(6)]
attr_array = []
for si in range(len(e)):
for i, vi in enumerate(e[si]):
attr_array += [*v[vi], *n[si], *t[i], si]
self.__no_vert = len(attr_array) // 9
self.__no_indices = len(index_array)
vertex_attributes = (ctypes.c_float * len(attr_array))(*attr_array)
indices = (ctypes.c_uint32 * self.__no_indices)(*index_array)
self.__vao = glGenVertexArrays(1)
self.__vbo, self.__ibo = glGenBuffers(2)
glBindVertexArray(self.__vao)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.__ibo)
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW)
glBindBuffer(GL_ARRAY_BUFFER, self.__vbo)
glBufferData(GL_ARRAY_BUFFER, vertex_attributes, GL_STATIC_DRAW)
float_size = ctypes.sizeof(ctypes.c_float)
glVertexAttribPointer(self.___attrib['a_pos'], 3, GL_FLOAT, False, 9*float_size, None)
glVertexAttribPointer(self.___attrib['a_nv'], 3, GL_FLOAT, False, 9*float_size, ctypes.c_void_p(3*float_size))
glVertexAttribPointer(self.___attrib['a_uv'], 3, GL_FLOAT, False, 9*float_size, ctypes.c_void_p(6*float_size))
glEnableVertexAttribArray(self.___attrib['a_pos'])
glEnableVertexAttribArray(self.___attrib['a_nv'])
glEnableVertexAttribArray(self.___attrib['a_uv'])
glEnable(GL_DEPTH_TEST)
glUseProgram(self.__program)
glActiveTexture(GL_TEXTURE0)
sizeX, sizeY = image_size
self.tex_obj = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D_ARRAY, self.tex_obj)
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA, sizeX, sizeY, 6, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)
for i in range(6):
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, sizeX, sizeY, 1, GL_RGBA, GL_UNSIGNED_BYTE, image_planes[i])
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
def run(self):
self.__starttime = 0
self.__starttime = self.elapsed_ms()
run = True
while run:
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
self.__mainloop()
pygame.display.flip()
pygame.quit()
def elapsed_ms(self):
return pygame.time.get_ticks() - self.__starttime
def __mainloop(self):
proj, view, model = glm.mat4(1), glm.mat4(1), glm.mat4(1)
aspect = self.__vp_size[0]/self.__vp_size[1]
proj = glm.perspective(glm.radians(90.0), aspect, 0.1, 10.0)
view = glm.lookAt(glm.vec3(0,-3,0), glm.vec3(0, 0, 0), glm.vec3(0,0,1))
angle1 = self.elapsed_ms() * math.pi * 2 / 5000.0
angle2 = self.elapsed_ms() * math.pi * 2 / 7333.0
model = glm.rotate(model, angle1, glm.vec3(1, 0, 0))
model = glm.rotate(model, angle2, glm.vec3(0, 1, 0))
glUniformMatrix4fv(self.___uniform['u_proj'], 1, GL_FALSE, glm.value_ptr(proj) )
glUniformMatrix4fv(self.___uniform['u_view'], 1, GL_FALSE, glm.value_ptr(view) )
glUniformMatrix4fv(self.___uniform['u_model'], 1, GL_FALSE, glm.value_ptr(model) )
glClearColor(0.2, 0.3, 0.3, 1.0)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glDrawElements(GL_TRIANGLES, self.__no_indices, GL_UNSIGNED_INT, None)
window = MyWindow(800, 600)
window.run()

Opengl vertex color always leads to a black triangle

My first triangle works well if the color is specified in the fragment shader. When I add a vertex color to the buffer the triangle is always black.
A minimal code with the issue:
type
OGLfloat = float32
OGLuint = uint32
OGLint = int32
const
GLFW_CONTEXT_VERSION_MAJOR = 0x00022002
GLFW_CONTEXT_VERSION_MINOR = 0x00022003
GLFW_OPENGL_PROFILE = 0x00022008
GLFW_OPENGL_CORE_PROFILE = 0x00032001
const
GL_COLOR_BUFFER_BIT = 0x00004000
GL_DEPTH_BUFFER_BIT = 0x00000100
GL_ACCUM_BUFFER_BIT = 0x00000200
GL_STENCIL_BUFFER_BIT = 0x00000400
GL_ARRAY_BUFFER = 0x8892
GL_ELEMENT_ARRAY_BUFFER = 0x8893
GL_FALSE = 0.char
GL_STATIC_DRAW = 0x88E4
GL_FLOAT = 0x1406
GL_VERTEX_SHADER = 0x8B31
GL_COMPILE_STATUS = 0x8B81
GL_INFO_LOG_LENGTH = 0x8B84
GL_FRAGMENT_SHADER = 0x8B30
GL_LINK_STATUS = 0x8B82
GL_TRIANGLES = 0x0004
GL_UNSIGNED_INT= 0x1405
GL_VERSION = 0x1F02
const
POSITION_LENGTH = 3
COLOR_LENGTH = 3
const
WINDOW_W = 640
WINDOW_H = 480
var
colorDataOffset = 3 * sizeof(OGLfloat)
vertices = @[OGLfloat(0.0), 0.5, 0, 1, 1, 1,
0.5, -0.5, 0, 0, 0, 0,
-0.5, -0.5, 0, 0, 0, 0]
indices = @[OGLuint(0), 1, 2]
# initialisation using glfw and glew
var glfwErr = glfwInit()
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3)
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3)
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE)
var winHandle = glfwCreateWindow(WINDOW_W, WINDOW_H)
glfwMakeContextCurrent(winHandle)
var glewErr = glewInit()
# shaders sources
var
shadID:OGLuint
vertSrc:cstring = """
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
out vec3 inColor;
void main()
{
gl_Position = vec4(aPos, 1.0f);
inColor = aColor;
}
"""
fragSrc:cstring = """
#version 330 core
out vec4 FragColor;
in vec3 inColor;
void main()
{
FragColor = vec4(inColor, 1.0f);
}
"""
# create the shader program
proc send_src(vert:var cstring, frag:var cstring):OGLuint =
var success:OGLint
# vertex
var vertexShader = glCreateShader(GL_VERTEX_SHADER)
glShaderSource(vertexShader, 1, addr vert, nil)
glCompileShader(vertexShader)
# Check compilation errors.
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, addr success)
if bool(success) == false:
echo(" vertex shader compilation failed (send_src)")
else:
echo("vertexShader compiled (send_src)")
# fragment
var fragmentShader = glCreateShader(GL_FRAGMENT_SHADER)
glShaderSource(fragmentShader, 1, addr frag, nil)
glCompileShader(fragmentShader)
# Check compilation errors.
glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, addr success)
if bool(success) == false:
echo("fragment shader compilation failed (send_src)"
else:
echo("fragmentShader compiled (send_src)")
# Shader program
result = glCreateProgram()
glAttachShader(result, vertexShader)
glAttachShader(result, fragmentShader)
glLinkProgram(result)
# Check for linkage errors.
glGetProgramiv(result, GL_LINK_STATUS, addr success)
if success == 0:
echo ("program linking failed (send_src)")
else:
echo ("shader linked (send_src)")
glDeleteShader(vertexShader)
glDeleteShader(fragmentShader)
glViewport(0, 0, WINDOW_W, WINDOW_H)
shadID = send_src(vertSrc, fragSrc)
var VAO, VBO, EBO:OGLuint
glGenVertexArrays(1, addr VAO)
glGenBuffers(1, addr VBO)
glGenBuffers(1, addr EBO)
glBindVertexArray(VAO)
glBindBuffer(GL_ARRAY_BUFFER, VBO)
glBufferData(GL_ARRAY_BUFFER, vertices.len * sizeof(OGLfloat),
addr vertices[0], GL_STATIC_DRAW)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO)
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.len * sizeof(OGLuint),
addr indices[0], GL_STATIC_DRAW)
# Position layout
glVertexAttribPointer(0, POSITION_LENGTH, GL_FLOAT, GL_FALSE, 6 * sizeof(OGLfloat),
nil)
glEnableVertexAttribArray(0)
# Color layout
glVertexAttribPointer(1, COLOR_LENGTH, GL_FLOAT, GL_FALSE, 6 * sizeof(OGLfloat),
addr colorDataOffset)
glEnableVertexAttribArray(1)
glBindBuffer(GL_ARRAY_BUFFER, 0)
glBindVertexArray(0)
glUseProgram(shadID)
while bool(glfwWindowShouldClose(winHandle)) == false:
glClearColor(0.2, 0.3, 0.3, 1.0)
glClear(GL_COLOR_BUFFER_BIT)
glBindVertexArray(VAO)
glDrawElements(GL_TRIANGLES, OGLint(indices.len), GL_UNSIGNED_INT, nil)
glfwSwapBuffers(winHandle)
glfwPollEvents()
glDeleteVertexArrays(1, addr VAO)
glDeleteBuffers(1, addr VBO)
glDeleteBuffers(1, addr EBO)
glfwDestroyWindow(winHandle)
glfwTerminate()
First see OpenGL 4.6 API Compatibility Profile Specification; 10.3.9 Vertex Arrays in Buffer Objects; page 409
... When an array is sourced from a buffer object for a vertex attribute, [...] the offset set for the vertex buffer [...] is used as the offset in basic machine units of the first element in that buffer’s data store.
This means that the 6th parameter of glVertexAttribPointer has to be the offset of the color attribute, encoded as a pointer.
When you do addr colorDataOffset in
glVertexAttribPointer(1, COLOR_LENGTH, GL_FLOAT, GL_FALSE, 6 * sizeof(OGLfloat),
addr colorDataOffset)
then you create a pointer, but the value of the pointer is the address of colorDataOffset - see The addr operator.
What you have to do is cast colorDataOffset to a pointer type with cast[pointer](colorDataOffset); the value of the pointer has to be the value of colorDataOffset - see Type casts and Types:
glVertexAttribPointer(1, COLOR_LENGTH, GL_FLOAT, GL_FALSE, 6 * sizeof(OGLfloat),
cast[pointer](colorDataOffset))
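For comparison, PyOpenGL uses the same offset-as-pointer pattern through ctypes.c_void_p. A hedged sketch (not part of the original answer), assuming the interleaved 3-float position / 3-float color layout of the question:
import ctypes
from OpenGL.GL import *   # assumes a current OpenGL context and a bound VBO

float_size = ctypes.sizeof(ctypes.c_float)
stride = 6 * float_size        # 3 position floats + 3 color floats per vertex
color_offset = 3 * float_size  # the color starts right after the 3 position floats
# The last argument is the byte offset wrapped in a void pointer, not the address of a variable.
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, ctypes.c_void_p(color_offset))
glEnableVertexAttribArray(1)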

My Opengl triangle has unexpected vertex colors

Hello, I have drawn an OpenGL triangle that worked fine with 3-float color vertex attributes; the alpha value was set in the shader.
Now, in this version, I'm trying to send color attributes with 4 floats. But the colors are weird and the third vertex is always black.
The programming language is Nim.
type
OGLfloat = float32
OGLuint = uint32
OGLint = int32
# Glfw3 and Opengl constants
const
GLFW_CONTEXT_VERSION_MAJOR = 0x00022002
GLFW_CONTEXT_VERSION_MINOR = 0x00022003
GLFW_OPENGL_PROFILE = 0x00022008
GLFW_OPENGL_CORE_PROFILE = 0x00032001
const
GL_COLOR_BUFFER_BIT = 0x00004000
GL_DEPTH_BUFFER_BIT = 0x00000100
GL_ACCUM_BUFFER_BIT = 0x00000200
GL_STENCIL_BUFFER_BIT = 0x00000400
GL_ARRAY_BUFFER = 0x8892
GL_ELEMENT_ARRAY_BUFFER = 0x8893
GL_FALSE = 0.char
GL_STATIC_DRAW = 0x88E4
GL_FLOAT = 0x1406
GL_VERTEX_SHADER = 0x8B31
GL_COMPILE_STATUS = 0x8B81
GL_INFO_LOG_LENGTH = 0x8B84
GL_FRAGMENT_SHADER = 0x8B30
GL_LINK_STATUS = 0x8B82
GL_TRIANGLES = 0x0004
GL_UNSIGNED_INT= 0x1405
GL_VERSION = 0x1F02
# My own constants
const
POSITION_LENGTH = 3.OGLint
COLOR_LENGTH = 4.OGLint
const
WINDOW_W = 640
WINDOW_H = 480
let
colorDataOffset = COLOR_LENGTH * OGLint(sizeof(OGLfloat))
# I didn't paste the OpenGL imports here to save some space
# Opengl imports...
var
# I expect a black triangle but the first two vertices are blue on screen.
vertices = @[OGLfloat(0.0), 0.5, 0, 0, 0, 0, 1,
0.5, -0.5, 0, 0, 0, 0, 1,
-0.5, -0.5, 0, 0, 0, 0, 1]
indices = @[OGLuint(0), 1, 2]
var glfwErr = glfwInit()
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3)
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3)
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE)
var winHandle = glfwCreateWindow(WINDOW_W, WINDOW_H)
glfwMakeContextCurrent(winHandle)
var glewErr = glewInit()
var
shadID:OGLuint
vertSrc:cstring = """
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec4 aColor;
out vec4 vColor;
void main()
{
gl_Position = vec4(aPos, 1.0f);
vColor = aColor;
}
"""
fragSrc:cstring = """
#version 330 core
out vec4 FragColor;
in vec4 vColor;
void main()
{
FragColor = vColor;
}
"""
proc send_src(vert:var cstring, frag:var cstring):OGLuint =
var success:OGLint
# vertex
var vertexShader = glCreateShader(GL_VERTEX_SHADER)
glShaderSource(vertexShader, 1, addr vert, nil)
glCompileShader(vertexShader)
# Check compilation errors.
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, addr success)
if bool(success) == false:
echo(" vertex shader compilation failed (send_src)")
else:
echo("vertexShader compiled (send_src)")
# fragment
var fragmentShader = glCreateShader(GL_FRAGMENT_SHADER)
glShaderSource(fragmentShader, 1, addr frag, nil)
glCompileShader(fragmentShader)
# Check compilation errors.
glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, addr success)
if bool(success) == false:
echo("fragment shader compilation failed (send_src)")
else:
echo("fragmentShader compiled (send_src)")
# Shader program
result = glCreateProgram()
glAttachShader(result, vertexShader)
glAttachShader(result, fragmentShader)
glLinkProgram(result)
# Check for linkage errors.
glGetProgramiv(result, GL_LINK_STATUS, addr success)
if success == 0:
echo("program linking failed (send_src)")
else:
echo("shader linked (send_src)")
glDeleteShader(vertexShader)
glDeleteShader(fragmentShader)
glViewport(0, 0, WINDOW_W, WINDOW_H)
shadID = send_src(vertSrc, fragSrc)
var VAO, VBO, EBO:OGLuint
glGenVertexArrays(1, addr VAO)
glGenBuffers(1, addr VBO)
glGenBuffers(1, addr EBO)
glBindVertexArray(VAO)
glBindBuffer(GL_ARRAY_BUFFER, VBO)
glBufferData(GL_ARRAY_BUFFER, vertices.len * sizeof(OGLfloat),
addr vertices[0], GL_STATIC_DRAW)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO)
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.len * sizeof(OGLuint),
addr indices[0], GL_STATIC_DRAW)
# Position layout
glVertexAttribPointer(0, POSITION_LENGTH, GL_FLOAT, GL_FALSE, (POSITION_LENGTH + COLOR_LENGTH) * OGLint(sizeof(OGLfloat)),
nil)
glEnableVertexAttribArray(0)
# Color layout
glVertexAttribPointer(1, COLOR_LENGTH, GL_FLOAT, GL_FALSE, (POSITION_LENGTH + COLOR_LENGTH) * OGLint(sizeof(OGLfloat)),
cast[pointer](colorDataOffset))
glEnableVertexAttribArray(1)
glBindBuffer(GL_ARRAY_BUFFER, 0)
glBindVertexArray(0)
glUseProgram(shadID)
while bool(glfwWindowShouldClose(winHandle)) == false:
glClearColor(0.2, 0.3, 0.3, 1.0)
glClear(GL_COLOR_BUFFER_BIT)
glBindVertexArray(VAO)
glDrawElements(GL_TRIANGLES, OGLint(indices.len), GL_UNSIGNED_INT, nil)
glfwSwapBuffers(winHandle)
glfwPollEvents()
glDeleteVertexArrays(1, addr VAO)
glDeleteBuffers(1, addr VBO)
glDeleteBuffers(1, addr EBO)
glfwDestroyWindow(winHandle)
glfwTerminate()
I didn't find what is wrong.
To set up the pointer for your color attribute, you are using:
glVertexAttribPointer(1, COLOR_LENGTH, GL_FLOAT, GL_FALSE, (POSITION_LENGTH + COLOR_LENGTH) * OGLint(sizeof(OGLfloat)),
cast[pointer](colorDataOffset))
This means that your first color attribute begins at byte offset colorDataOffset.
Since your vertex format is (3*4 bytes position | 4*4 bytes color), the correct offset would be 12, in order to skip the position part of the very first vertex. However, you set it to:
colorDataOffset = COLOR_LENGTH * OGLint(sizeof(OGLfloat))
which should evaluate to 16, so you actually mix it with the position data of the next vertex.
You need to use POSITION_LENGTH here...
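To illustrate the arithmetic, a small Python sketch (not from the original answer) of the byte layout of one interleaved vertex with 3 position floats followed by 4 color floats:
import ctypes

FLOAT_SIZE = ctypes.sizeof(ctypes.c_float)               # 4 bytes
POSITION_LENGTH = 3
COLOR_LENGTH = 4
stride = (POSITION_LENGTH + COLOR_LENGTH) * FLOAT_SIZE   # 28 bytes per vertex
color_offset = POSITION_LENGTH * FLOAT_SIZE              # 12 bytes: skip the position part
wrong_offset = COLOR_LENGTH * FLOAT_SIZE                 # 16 bytes: starts inside the first color and reads into the next vertex's position
print(stride, color_offset, wrong_offset)                # 28 12 16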

Cannot Read Values Passed to Vertex Shader

I am trying to wrap my head around the various types of GLSL shaders in OpenGL.
At the moment I am struggling with a 2d layered-tile implementation. For some reason the int values that get passed into my shader are always 0 (or more likely, null).
I currently have a 2048x2048px 2d texture composed of 20x20 tiles. I am trying to texture one quad with it and change the index of the tile based upon the block of ints I pass into the vertex shader.
I am passing in a vec2 of floats for the position of the quad (really a TRIANGLE_STRIP). I am also attempting to pass in 6 ints that will represent the 6 layers of tiles.
My input:
// Build and compile our shader program
Shader ourShader("b_vertex.vertexShader", "b_fragment.fragmentShader");
const int floatsPerPosition = 2;
const int intsPerTriangle = 6;
const int numVertices = 4;
const int sizeOfPositions = sizeof(float) * numVertices * floatsPerPosition;
const int sizeOfColors = sizeof(int) * numVertices * intsPerTriangle;
const int numIndices = 4;
const int sizeOfIndices = sizeof(int) * numIndices;
float positions[numVertices][floatsPerPosition] =
{
{ -1, 1 },
{ -1, -1 },
{ 1, 1 },
{ 1, -1 },
};
// ints indicating Tile Index
int colors[numVertices][intsPerTriangle] =
{
{ 1, 2, 3, 4, 5, 6 },
{ 1, 2, 3, 4, 5, 6 },
{ 1, 2, 3, 4, 5, 6 },
{ 1, 2, 3, 4, 5, 6 },
};
// Indexes on CPU
int indices[numVertices] =
{
0, 1, 2, 3,
};
My setup:
GLuint vao, vbo1, vbo2, ebo; // Identifiers of OpenGL objects
glGenVertexArrays(1, &vao); // Create new VAO
// The bound VAO will store connections between VBOs and attributes
glBindVertexArray(vao);
glGenBuffers(1, &vbo1); // Create new VBO
glBindBuffer(GL_ARRAY_BUFFER, vbo1); // Bind vbo1 as current vertex buffer
// initialize vertex buffer, allocate memory, fill it with data
glBufferData(GL_ARRAY_BUFFER, sizeOfPositions, positions, GL_STATIC_DRAW);
// indicate that current VBO should be used with vertex attribute with index 0
glEnableVertexAttribArray(0);
// indicate how vertex attribute 0 should interpret data in connected VBO
glVertexAttribPointer(0, floatsPerPosition, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, &vbo2); // Create new VBO
glBindBuffer(GL_ARRAY_BUFFER, vbo2); // Bind vbo2 as current vertex buffer
// initialize vertex buffer, allocate memory, fill it with data
glBufferData(GL_ARRAY_BUFFER, sizeOfColors, colors, GL_STATIC_DRAW);
// indicate that current VBO should be used with vertex attribute with index 1
glEnableVertexAttribArray(1);
// indicate how vertex attribute 1 should interpret data in connected VBO
glVertexAttribPointer(1, intsPerTriangle, GL_INT, GL_FALSE, 0, 0);
// Create new buffer that will be used to store indices
glGenBuffers(1, &ebo);
// Bind index buffer to corresponding target
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
// initialize index buffer, allocate memory, fill it with data
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeOfIndices, indices, GL_STATIC_DRAW);
// reset bindings for VAO, VBO and EBO
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// Load and create a texture
GLuint texture1 = loadBMP_custom("uvtemplate3.bmp");
GLuint texture2 = loadBMP_custom("texture1.bmp");
My draw:
// Game loop
while (!glfwWindowShouldClose(window))
{
// Check if any events have been activated (key pressed, mouse moved etc.) and call corresponding response functions
glfwPollEvents();
// Render
// Clear the colorbuffer
glClearColor(1.f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Activate shader
ourShader.Use();
// Bind Textures using texture units
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
//add some cool params
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
float borderColor[] = { 0.45f, 0.25f, 0.25f, 0.25f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture1"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 1);
// Draw container
//glBindVertexArray(VAO);
//glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(vao);
//glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glDrawElements(GL_TRIANGLE_STRIP, numIndices, GL_UNSIGNED_INT, NULL);
glBindVertexArray(0);
// Swap the screen buffers
glfwSwapBuffers(window);
}
My shader most definitely works, as I can adjust the output by hard-coding the values from within the vertex shader. My suspicion is that I am not passing the values correctly or in the correct format, or that I am not indicating somewhere that the int[6] needs to be included per vertex.
I cannot read anything from my layout (location = 1) in int Base[6]; I've tried just about everything I can think of: declaring each int individually, trying to read two ivec3's, uint, and whatever else I could think of, but everything comes back as 0.
The following are my vertex and fragment shader for completeness:
#version 330 core
layout (location = 0) in vec2 position;
layout (location = 1) in int Base[6];
out vec2 TexCoord;
out vec2 TexCoord2;
out vec2 TexCoord3;
out vec2 TexCoord4;
out vec2 TexCoord5;
out vec2 TexCoord6;
// 0.5f, 0.5f,// 0.0f, 118.0f, 0.0f, 0.0f, 0.0f, 0.0f, // Top Right
// 0.5f, -0.5f,// 0.0f, 118.0f, 1.0f, 0.0f, 0.0f,0.009765625f, // Bottom Right
// -0.5f, -0.5f,// 0.0f, 118.0f, 0.0f, 1.0f, 0.009765625f, 0.009765625f, // Bottom Left
// -0.5f, 0.5f//, 0.0f, 118.0f, 1.0f, 0.0f, 0.009765625f, 0.0f // Top Left
void main()
{
int curBase = Base[5];
int curVertex = gl_VertexID % 4;
vec2 texCoord = (curVertex == 0?
vec2(0.0,0.0):(
curVertex == 1?
vec2(0.0,0.009765625):(
curVertex == 2?
vec2(0.009765625,0.0):(
curVertex == 3?
vec2(0.009765625,0.009765625):(
vec2(0.0,0.0)))))
);
gl_Position = vec4(position, 0.0f, 1.0f);
TexCoord = vec2(texCoord.x + ((int(curBase)%102)*0.009765625f)
, (1.0 - texCoord.y) - ((int(curBase)/102)*0.009765625f));
//curBase = Base+1;
TexCoord2 = vec2(texCoord.x + ((int(curBase)%102)*0.009765625f)
, (1.0 - texCoord.y) - ((int(curBase)/102)*0.009765625f));
//curBase = Base+2;
TexCoord3 = vec2(texCoord.x + ((int(curBase)%102)*0.009765625f)
, (1.0 - texCoord.y) - ((int(curBase)/102)*0.009765625f));
}
Fragment:
#version 330 core
//in vec3 ourColor;
in vec2 TexCoord;
in vec2 TexCoord2;
in vec2 TexCoord3;
in vec2 TexCoord4;
in vec2 TexCoord5;
in vec2 TexCoord6;
out vec4 color;
// Texture samplers
uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;
void main()
{
color = (texture(ourTexture2, TexCoord )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord2 )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord3 )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord4 )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord5 )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord6 )== vec4(1.0,0.0,1.0,1.0)?
vec4(0.0f,0.0f,0.0f,0.0f)
:texture(ourTexture2, TexCoord6 ))
:texture(ourTexture2, TexCoord5 ))
:texture(ourTexture2, TexCoord4 ))
:texture(ourTexture2, TexCoord3 ))
:texture(ourTexture2, TexCoord2 ))
:texture(ourTexture2, TexCoord ));
}
This is wrong in two different ways:
glVertexAttribPointer(1, intsPerTriangle, GL_INT, GL_FALSE, 0, 0);
Vertex attributes in the GL can be scalars or vectors of 2 to 4 components. Hence, the size parameter of glVertexAttribPointer can take the values 1, 2, 3 or 4. Using a different value (intsPerTriangle == 6) means that the call will just generate a GL_INVALID_VALUE error and have no other effect, so you don't even set a pointer.
If you want to pass 6 values per vertex, you can either use 6 different scalar attributes (consuming 6 attribute slots), or pack them into some vectors, like 2 3D vectors (consuming only 2 slots). No matter what packing you choose, you'll need a proper attrib pointer setup for each attribute slot in use.
However, glVertexAttribPointer is also the wrong function for your use case. It defines floating-point attributes, which must have matching declarations as float/vec* in the shader. The fact that you can input GL_INT just means that the GPU can do the conversion to floating-point on the fly for you.
If you want to use an int or ivec (or their unsigned counterparts) attribute, you have to use glVertexAttribIPointer (note the I in that function name) when setting up the attribute.
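As a concrete illustration of that last point, here is a hedged sketch (written in PyOpenGL for consistency with the other examples; the C++ calls take the same arguments) that packs the six layer indices into two ivec3 attributes and sets them up with glVertexAttribIPointer; vbo2 is assumed to be the buffer holding the per-vertex int data, as in the question:
import ctypes
from OpenGL.GL import *   # assumes a current context, a bound VAO and the vbo2 buffer from the question

int_size = ctypes.sizeof(ctypes.c_int)
stride = 6 * int_size                     # six ints per vertex
glBindBuffer(GL_ARRAY_BUFFER, vbo2)
glEnableVertexAttribArray(1)
glVertexAttribIPointer(1, 3, GL_INT, stride, ctypes.c_void_p(0))             # first ivec3
glEnableVertexAttribArray(2)
glVertexAttribIPointer(2, 3, GL_INT, stride, ctypes.c_void_p(3 * int_size))  # second ivec3
# The matching vertex shader declarations would be:
#   layout (location = 1) in ivec3 BaseA;
#   layout (location = 2) in ivec3 BaseB;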

Texturing a box in OpenGL 4.5 - (doesn't work)

I have drawn a box. I added the basic class that deals with the VAO and VBO.
class BasicObject_V2
{
public:
BasicObject_V2(GLuint typeOfPrimitives = 0){
this->typeOfPrimitives = typeOfPrimitives;
switch (this->typeOfPrimitives){
case GL_QUADS: numVertsPerPrimitive = 4; break;
}}
void allocateMemory(){
glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);
glGenBuffers(1, &vboID);
glBindBuffer(GL_ARRAY_BUFFER, vboID);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(vertexAttributes), &vertices[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 48, (const void*)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 48, (const void*)12);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 48, (const void*)24);
glVertexAttribPointer(3, 2, GL_FLOAT, GL_FALSE, 48, (const void*)40);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glEnableVertexAttribArray(3);
}
GLuint vboID;
GLuint vaoID;
GLuint typeOfPrimitives;
GLuint numVertsPerPrimitive;
int getNumberOfVertices(){
return vertices.size();
}
struct vertexAttributes{
glm::vec3 pos;
glm::vec3 normal;
glm::vec4 color;
glm::vec2 texCoord;
};
std::vector<vertexAttributes> vertices;
};
Here is my Box class.
class Box_V2 : public BasicObject_V2 {
public:
Box_V2(float width = 1, float height = 1, float length = 1) : BasicObject_V2(GL_QUADS) {
float wHalf = width / 2.0f;
float hHalf = height / 2.0f;
float lHalf = length / 2.0f;
vertexAttributes va;
glm::vec3 corners[8];
corners[0] = glm::vec3(-wHalf, -hHalf, lHalf);
corners[1] = (...)
glm::vec3 normals[6];
normals[0] = glm::vec3(0, 0, 1); // front
//(...)
// FRONT
va.pos = corners[0];
va.normal = normals[0];
va.texCoord = glm::vec2(0.0, 0.0);
vertices.push_back(va);
//...
allocateMemory();
}
};
Also, I have set up the texture for it (according to the OpenGL SuperBible example), but it doesn't work at all.
static GLubyte checkImage[128][128][4];
void makeCheckImage(void)
{
int i, j, c;
for (i = 0; i<128; i++) {
for (j = 0; j<128; j++) {
c = ((((i & 0x8) == 0) ^ ((j & 0x8)) == 0)) * 255;
checkImage[i][j][0] = (GLubyte)c;
checkImage[i][j][1] = (GLubyte)c;
checkImage[i][j][2] = (GLubyte)c;
checkImage[i][j][3] = (GLubyte)255;
}}}
void init(){
glClearColor(0.7, 0.4, 0.6, 1);
//Create a Texture
makeCheckImage();
//Texure
glGenTextures(4, tex);
//glCreateTextures(GL_TEXTURE_2D, 4, tex);//4.5
glTextureStorage2D(tex[0], 0, GL_RGBA32F, 2, 2);
glBindTexture(GL_TEXTURE_2D, tex[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_RGBA, GL_FLOAT, checkImage);
//glTextureSubImage2D(tex[0], 0, 0, 0, 2, 2, GL_RGBA, GL_FLOAT, checkImage);//4.5
//load the shader
programID = loadShaders("vertex.vsh", "fragment.fsh");
glUseProgram(programID);
//create a box
box_v2 = new Box_V2();
glBindTexture(GL_TEXTURE_2D, tex[0]);
}}
void drawing(BasicObject* object){
glBindVertexArray(object->vaoID);
glDrawArrays(object->typeOfPrimitives, 0, object->getNumberOfVertices() * object->numVertsPerPrimitive);
glBindVertexArray(0);
}
These are my shaders:
VS:
#version 450 core
layout(location = 0) in vec3 pos;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec4 color;
layout(location = 3) in vec2 texCoord;
uniform mat4 rotate;(...)
out vec4 v_normal;
out vec4 v_color;
out vec4 v_pos;
out vec2 v_texCoord;
void main(){
v_normal = rotate*vec4(normal,1);//*model;
v_color=color;
v_texCoord=texCoord;
gl_Position = rotate*scale*trans*vec4(pos.xyz, 1);
}
And FS:
#version 450 core
in vec4 v_color;
in vec2 v_texCoord;
uniform sampler2D ourTexture;
out vec4 outColor;
void main(){
//outColor=texelFetch(ourTexture, ivec2(gl_FragCoord.xy), 0); //4.5
outColor=texture(ourTexture,v_texCoord*vec2(3.0, 1.0));
}
Now it draws a black box without any texture on it. I'm new to OpenGL and I can't find where the problem is.
There are a lot of things wrong with your code. It looks like you tried to write it one way, then tried to write it another way, leading to some kind of Frankenstein's code where it's not really one way or the other.
glGenTextures(4, tex);
//glCreateTextures(GL_TEXTURE_2D, 4, tex);//4.5
glTextureStorage2D(tex[0], 0, GL_RGBA32F, 2, 2);
This is a perfect example. You use the non-DSA glGenTextures call, but you immediately turn around and use the DSA glTextureStorage2D call on it. You cannot do that. Why? Because tex[0] hasn't been created yet.
glGenTextures only allocates the name for the texture, not its state data. Only by binding the texture does it get contents. That's why DSA introduced the glCreate* functions; they allocate both the name and state for the object. This is also why glCreateTextures takes a target; that target is part of the texture object's state.
In short, the commented out call was correct all along. Looking at your code as written, the glTextureStorage2D call fails with an OpenGL error (FYI: turn on KHR_debug to see when errors happen).
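For a quick check without wiring up a full KHR_debug callback, you can poll glGetError right after the suspicious call. A minimal sketch (PyOpenGL here for consistency with the other examples; glGetError works the same way in C++):
from OpenGL.GL import *   # assumes a current OpenGL context

err = glGetError()        # call this right after glTextureStorage2D(...)
if err != GL_NO_ERROR:
    print("glTextureStorage2D failed with OpenGL error 0x%04X" % err)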
So let's look at what your code would look like if we take out the failure parts:
glGenTextures(4, tex);
glBindTexture(GL_TEXTURE_2D, tex[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_RGBA, GL_FLOAT, checkImage);
There are two problems here. First, as BDL mentioned, your data is described incorrectly. You're passing GLubytes, but you told OpenGL you were passing GLfloats. Don't lie to OpenGL.
Second, there's the internal format. Because you used an unsized internal format (and because you're not in OpenGL ES land), the format your implementation chooses will be unsigned normalized. It won't be a 32-bit floating point format. If you want one of those, you must explicitly ask for it, with GL_RGBA32F.
Furthermore, because you allocated mutable storage for your texture (ie: because glTextureStorage2D failed), you now have a problem. You only allocated one mipmap level, but the default filtering mode will try to fetch from multiple mipmaps. And you never set the texture's mipmap range to only sample from the first. So your texture is not complete. An immutable storage texture would have made the mipmap range match what you allocated, but mutable storage textures aren't so nice.
You should always set your mipmap range when creating mutable storage textures:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0); //It's a closed range.
Lastly, you bind the texture to unit 0 for rendering (mostly by accident, since you never call glActiveTexture). But you never bother to inform your shader of this, by setting the ourTexture uniform value to 0.
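Putting that last point into code, a hedged sketch (PyOpenGL for consistency with the other examples; tex, programID and the uniform name ourTexture come from the question):
from OpenGL.GL import *   # assumes the GL context, tex and programID from the question

glActiveTexture(GL_TEXTURE0)    # bind to texture unit 0 explicitly rather than by accident
glBindTexture(GL_TEXTURE_2D, tex[0])
glUseProgram(programID)
glUniform1i(glGetUniformLocation(programID, "ourTexture"), 0)   # the sampler reads from unit 0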
There are several problems with your code:
Data type doesn't match data
The data comes in an unsigned byte format (GLubyte checkImage[128][128][4]) with a size of 4 bytes per pixel (1 byte per channel), but you tell OpenGL that it is in GL_FLOAT format, which would require 16 bytes per pixel (4 bytes per channel).
Data storage
See answer from 246Nt.
Mipmaps/Filters
By default OpenGL has the minification filter set to GL_NEAREST_MIPMAP_LINEAR, which requires the user to supply valid mipmaps for the texture. One can either change this filter to GL_LINEAR or GL_NEAREST, which do not require mipmaps (see glTexParameteri), or generate mipmaps (glGenerateMipmap).
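Putting the points above together, a minimal non-DSA sketch of a working texture setup (PyOpenGL for consistency with the other examples; the C++ calls are identical, and checkImage stands for the 128x128 RGBA byte data from the question):
from OpenGL.GL import *   # assumes a current OpenGL context and the checkImage data

tex0 = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex0)
# Sized internal format, and a data type that matches the GLubyte check image.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage)
# Only one mip level is supplied, so use a filter that does not need mipmaps ...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
# ... or keep a mipmap filter and call glGenerateMipmap(GL_TEXTURE_2D) after the upload.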