This is what happens when I switch from the black texture to the lime green one while drawing in a simple for loop. It seems to pick up bits of the previously drawn texture.
Here's a simplified version of how my renderer works:
Init(): Create my VAO and attribute pointers, and generate the element buffer and indices.
Begin(): Bind my vertex buffer and map the buffer pointer.
Draw(): Submit a renderable to draw, which writes 4 vertices into the vertex buffer; each gets a position, color, texCoords, and a texture slot.
End(): I delete the buffer pointer, bind my VAO, IBO, and textures to their active texture slots, and draw the elements.
I do this every frame (except Init). What I don't understand is that if I draw PER TEXTURE, with only one texture active at a time, this doesn't happen. It only happens when multiple textures are active and bound.
Here's my renderer:
void Renderer2D::Init()
{
    m_Textures.reserve(32);
    m_VertexBuffer.Create(nullptr, VERTEX_BUFFER_SIZE);

    m_Layout.PushFloat(2); // Position
    m_Layout.PushUChar(4); // Color
    m_Layout.PushFloat(2); // TexCoords
    m_Layout.PushFloat(1); // Texture ID

    // VA is bound and VB is unbound
    m_VertexArray.AddBuffer(m_VertexBuffer, m_Layout);

    unsigned int* indices = new unsigned int[INDEX_COUNT];
    int offset = 0;
    for (int i = 0; i < INDEX_COUNT; i += 6)
    {
        indices[i + 0] = offset + 0;
        indices[i + 1] = offset + 1;
        indices[i + 2] = offset + 2;

        indices[i + 3] = offset + 2;
        indices[i + 4] = offset + 3;
        indices[i + 5] = offset + 0;

        offset += 4;
    }
    m_IndexBuffer.Create(indices, INDEX_COUNT);
    delete[] indices; // index data has been uploaded; free the CPU copy
    m_VertexArray.Unbind();
}
void Renderer2D::Begin()
{
    m_VertexBuffer.Bind();
    m_Buffer = (VertexData*)m_VertexBuffer.GetBufferPointer();
}
void Renderer2D::Draw(Renderable2D& renderable)
{
    const glm::vec2& position = renderable.GetPosition();
    const glm::vec2& size = renderable.GetSize();
    const Color& color = renderable.GetColor();
    const glm::vec4& texCoords = renderable.GetTextureRect();
    const float tid = AddTexture(renderable.GetTexture());
    DT_CORE_ASSERT(tid != 0, "TID IS EQUAL TO ZERO");

    m_Buffer->position = glm::vec2(position.x, position.y);
    m_Buffer->color = color;
    m_Buffer->texCoord = glm::vec2(texCoords.x, texCoords.y);
    m_Buffer->tid = tid;
    m_Buffer++;

    m_Buffer->position = glm::vec2(position.x + size.x, position.y);
    m_Buffer->color = color;
    m_Buffer->texCoord = glm::vec2(texCoords.z, texCoords.y);
    m_Buffer->tid = tid;
    m_Buffer++;

    m_Buffer->position = glm::vec2(position.x + size.x, position.y + size.y);
    m_Buffer->color = color;
    m_Buffer->texCoord = glm::vec2(texCoords.z, texCoords.w);
    m_Buffer->tid = tid;
    m_Buffer++;

    m_Buffer->position = glm::vec2(position.x, position.y + size.y);
    m_Buffer->color = color;
    m_Buffer->texCoord = glm::vec2(texCoords.x, texCoords.w);
    m_Buffer->tid = tid;
    m_Buffer++;

    m_IndexCount += 6;
}
void Renderer2D::End()
{
    Flush();
}
const float Renderer2D::AddTexture(const Texture2D* texture)
{
    for (int i = 0; i < m_Textures.size(); i++) {
        if (texture == m_Textures[i]) // Compares memory addresses
            return i + 1; // Returns the texture id plus one, since 0 is the null texture id
    }

    // If the texture count is already at or above the maximum, flush and start a new batch
    if (m_Textures.size() >= MAX_TEXTURES)
    {
        End();
        Begin();
    }
    m_Textures.push_back((Texture2D*)texture);
    return m_Textures.size();
}
void Renderer2D::Flush()
{
    m_VertexBuffer.DeleteBufferPointer();
    m_VertexArray.Bind();
    m_IndexBuffer.Bind();

    for (int i = 0; i < m_Textures.size(); i++) {
        glActiveTexture(GL_TEXTURE0 + i);
        m_Textures[i]->Bind();
    }

    glDrawElements(GL_TRIANGLES, m_IndexCount, GL_UNSIGNED_INT, NULL);

    m_IndexBuffer.Unbind();
    m_VertexArray.Unbind();
    m_IndexCount = 0;
    m_Textures.clear();
}
Here's my fragment shader:
#version 330 core
out vec4 FragColor;

in vec4 ourColor;
in vec2 ourTexCoord;
in float ourTid;

uniform sampler2D textures[32];

void main()
{
    vec4 texColor = ourColor;
    if (ourTid > 0.0)
    {
        int tid = int(ourTid - 0.5);
        texColor = ourColor * texture(textures[tid], ourTexCoord);
    }
    FragColor = texColor;
}
I appreciate any help; let me know if you need to see more code.
I don't know if you still need this, but for the record: you have a logic problem in your fragment shader code.
Suppose your ourTid is greater than 0; take 1.0 for example. You subtract 0.5 and cast to int: int(0.5) is 0, for sure. Now say you need texture number 2 and apply the same process: 2 - 0.5 = 1.5, cast to int, gives 1.
So you will get the previous texture every time.
The solution is easy: add 0.5 instead of subtracting it, so you can be sure interpolation error is absorbed and you get the correct texture.
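The rounding arithmetic can be sketched on the CPU. This is a hypothetical model, not the shader itself (`truncateToSlot` and `roundToSlot` are names made up for illustration): GLSL's int() cast truncates toward zero, so an interpolated value that should be exactly 2.0 but arrives as 1.9999 truncates down to the previous slot, while adding 0.5 first rounds to the nearest integer.

```cpp
#include <cassert>

// Model of GLSL's int(float) cast, which truncates toward zero.
// An interpolated tid that is nominally 2.0 can reach the fragment
// shader as 1.9999 and then truncate to the previous texture slot.
int truncateToSlot(float tid) {
    return static_cast<int>(tid);
}

// Adding 0.5 before truncating rounds to the nearest integer,
// absorbing small interpolation error in either direction.
int roundToSlot(float tid) {
    return static_cast<int>(tid + 0.5f);
}
```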
Related
I am writing a program which draws the Mandelbrot set. For every pixel, I run a function that returns an activation number between 0 and 1. Currently this is done in a fragment shader, and the activation is my color.
But imagine you zoom in on the fractal and suddenly all the activations visible on screen are between 0.87 and 0.95. You can't see the differences very well.
I am looking for a way to first calculate all the activations and store them in an array, then choose the colors based on that array. Both steps need to run on the GPU for performance reasons.
So you need to find the minimum and maximum intensity of the picture you've rendered. This cannot be done in a single draw, since these values are nonlocal. A possible approach is to recursively apply a pipeline that downscales the image by half, computing the minimum and maximum of each 2x2 square and storing them, e.g., in an RG texture (a kind of mipmap generation, with min/max instead of averaging colours). In the end you have a 1x1 texture that contains the minimal and maximal values of your image in its only pixel. You can sample this texture in the final render that maps activation values to colours.
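The reduction can be sketched on the CPU (a minimal illustration with made-up names and a square power-of-two size assumed; on the GPU each `downscale` step would be one draw into a half-size RG render target):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Each "pixel" holds a (min, max) pair, like an RG texel.
using MinMax = std::pair<float, float>;

// One reduction step: halve a w x h image, keeping the min and max
// of every 2x2 block.
std::vector<MinMax> downscale(const std::vector<MinMax>& src, int w, int h) {
    std::vector<MinMax> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            MinMax m{1e30f, -1e30f};
            for (int dy = 0; dy < 2; ++dy) {
                for (int dx = 0; dx < 2; ++dx) {
                    const MinMax& s = src[(2 * y + dy) * w + (2 * x + dx)];
                    m.first = std::min(m.first, s.first);
                    m.second = std::max(m.second, s.second);
                }
            }
            dst[y * (w / 2) + x] = m;
        }
    }
    return dst;
}

// Apply the step recursively until a single (min, max) pair remains,
// the equivalent of the final 1x1 texture.
MinMax reduce(std::vector<MinMax> img, int size) {
    while (size > 1) {
        img = downscale(img, size, size);
        size /= 2;
    }
    return img[0];
}
```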
I solved my issue by creating a new GL program and attaching a compute shader to it.
unsigned int vs = CompileShader(vertShaderStr, GL_VERTEX_SHADER);
unsigned int fs = CompileShader(fragShaderStr, GL_FRAGMENT_SHADER);
unsigned int cs = CompileShader(compShaderStr, GL_COMPUTE_SHADER);
glAttachShader(mainProgram, vs);
glAttachShader(mainProgram, fs);
glAttachShader(computeProgram, cs);
glLinkProgram(computeProgram);
glValidateProgram(computeProgram);
glLinkProgram(mainProgram);
glValidateProgram(mainProgram);
glUseProgram(computeProgram);
Then, in the Render loop I switch programs and run the compute shader.
glUseProgram(computeProgram);
glDispatchCompute(resolutionX, resolutionY, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(mainProgram);
/* Drawing the whole screen using the shader */
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
/* Poll for and process events */
glfwPollEvents();
updateBuffer();
Update();
/* Swap front and back buffers */
glfwSwapBuffers(window);
I pass the data from compute shader to fragment shader via shader storage buffer.
void setupBuffer() {
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    // sizeof(data) only works for statically sized C/C++ arrays.
    glNamedBufferStorage(ssbo, sizeof(float) * (resolutionX * resolutionY + SH_EXTRA_FLOATS),
                         &data, GL_MAP_WRITE_BIT | GL_MAP_READ_BIT | GL_DYNAMIC_STORAGE_BIT);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo);
}

void updateBuffer() {
    float d[] = { data.min, data.max };
    glNamedBufferSubData(ssbo, 0, 2 * sizeof(float), &d);
}
In the compute shader, I can access the buffer like this:
layout(std430, binding = 1) buffer bufferIn
{
    float min;
    float max;
    float data[];
};

layout(std430, binding = 1) buffer destBuffer
{
    float min;
    float max;
    float data[];
} outBuffer;

void main() {
    int index = int(gl_WorkGroupID.x + screenResolution.x * gl_WorkGroupID.y);
    dvec2 coords = adjustCoords();
    dvec4 position = rotatedPosition(coords);
    for (int i = 0; i < maxIter; i++) {
        position = pow2(position);
        double length = lengthSQ(position);
        if (length > treashold) {
            float log_zn = log(float(length)) / 2.0;
            float nu = log(log_zn / log(2.0)) / log2;
            float iterAdj = 1.0 - nu + float(i);
            float scale = iterAdj / float(maxIter);
            if (scale < 0)
                data[index] = -2;
            data[index] = scale;
            if (scale > max) max = scale;
            if (scale < min && scale > 0) min = scale;
            return;
        }
    }
    data[index] = -1;
}
And finally, in the fragment shader, I can read the buffer like this:
layout(std430, binding = 1) buffer bufferIn
{
    float min;
    float max;
    float data[];
};

if (data[index] == -1) {
    color = notEscapedColor;
    return;
}
float value = (data[index] - min) / (max - min);
if (value < 0) value = 0;
if (value > 1) value = 1;
Here is the code in its entirety.
I'm trying to iterate over a large amount of data in my fragment shader in WebGL. I want to pass a lot of data to it and then iterate over it on each pass of the fragment shader. I'm having some issues doing that, though. My ideas were the following:
1. Pass the data in uniforms to the frag shader, but I can't send very much data that way.
2. Use a buffer to send data, as I do with verts to the vert shader, and then use a varying to send the data to the frag shader. Unfortunately this seems to involve some issues: (a) varyings interpolate between vertices, and I think that'll cause issues with my code (although perhaps this is unavoidable); (b) more importantly, I don't know how to iterate over the data I pass to my fragment shader. I'm already using a buffer for my 3D point coordinates, but how does WebGL handle a second buffer and the data coming through it?
I mean to say: in what order is data fetched from each buffer (my first buffer containing 3D coordinates and the second buffer I'm trying to add)? Lastly, as stated above, if I want to iterate over all the data passed on every pass of the fragment shader, how can I do that?
I've already tried using a uniform array and iterating over that in my fragment shader, but I ran into limitations, I believe, since there is a relatively small size limit for uniforms. I'm currently trying the second method mentioned above.
// pseudo code
vertexCode = `
  attribute vec4 3dcoords;
  varying vec4 3dcoords;
  ??? ??? my_special_data;
  void main() { ... }
`
fragCode = `
  varying vec4 3dcoords;
  void main() {
    ...
    // perform a math operation on 3dcoords for all values in
    // my_special_data and store the result in my_results
    if (my_results ...) {
      gl_FragColor += ...;
    }
  }
`
Textures in WebGL are random-access 2D arrays of data, so you can use them to read lots of data.
Example:
const width = 256;
const height = 256;

const vs = `
attribute vec4 position;
void main() {
  gl_Position = position;
}
`;

const fs = `
precision highp float;
uniform sampler2D tex;
const int width = ${width};
const int height = ${height};
void main() {
  vec4 sums = vec4(0);
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      vec2 xy = (vec2(x, y) + 0.5) / vec2(width, height);
      sums += texture2D(tex, xy);
    }
  }
  gl_FragColor = sums;
}
`;
function main() {
  const gl = document.createElement('canvas').getContext('webgl');
  // check if we can make floating point textures
  const ext1 = gl.getExtension('OES_texture_float');
  if (!ext1) {
    return alert('need OES_texture_float');
  }
  // check if we can render to floating point textures
  const ext2 = gl.getExtension('WEBGL_color_buffer_float');
  if (!ext2) {
    return alert('need WEBGL_color_buffer_float');
  }

  // make a 1x1 pixel floating point RGBA texture and attach it to a framebuffer
  const framebufferInfo = twgl.createFramebufferInfo(gl, [
    { type: gl.FLOAT, },
  ], 1, 1);

  // make random 256x256 texture
  const data = new Uint8Array(width * height * 4);
  for (let i = 0; i < data.length; ++i) {
    data[i] = Math.random() * 256;
  }
  const tex = twgl.createTexture(gl, {
    src: data,
    minMag: gl.NEAREST,
    wrap: gl.CLAMP_TO_EDGE,
  });

  // compile shaders, link, look up locations
  const programInfo = twgl.createProgramInfo(gl, [vs, fs]);

  // create a buffer and put a 2 unit
  // clip space quad in it using 2 triangles
  const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
    position: {
      numComponents: 2,
      data: [
        -1, -1,
         1, -1,
        -1,  1,
        -1,  1,
         1, -1,
         1,  1,
      ],
    },
  });

  // render to the 1 pixel texture
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebufferInfo.framebuffer);
  // set the viewport for 1x1 pixels
  gl.viewport(0, 0, 1, 1);
  gl.useProgram(programInfo.program);
  // calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
  twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
  // calls gl.activeTexture, gl.bindTexture, gl.uniformXXX
  twgl.setUniforms(programInfo, {
    tex,
  });
  const offset = 0;
  const count = 6;
  gl.drawArrays(gl.TRIANGLES, offset, count);

  // read the result
  const pixels = new Float32Array(4);
  gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.FLOAT, pixels);
  console.log('webgl sums:', pixels);

  // compute the same sums in JavaScript to compare
  const sums = new Float32Array(4);
  for (let i = 0; i < data.length; i += 4) {
    for (let j = 0; j < 4; ++j) {
      sums[j] += data[i + j] / 255;
    }
  }
  console.log('js sums:', sums);
}
main();
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
I have reduced a previous rendering problem to a core case where I am stuck.
I have a vertex buffer, consisting of 4 vertices, arranged in a plane (labeled 0 to 3):
1. .2
0. .3
and a corresponding index buffer {0,1,2,3,0}.
Now, when I render with D3D11_PRIMITIVE_TOPOLOGY_LINESTRIP, I achieve the expected image:
__
| |
|__|
However, when I render with D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP the result is:
| /|
|/ |
Note that no filling of triangles is performed.
Even more confusing, when using D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST the result is:
|
|
If I change the index buffer to {0,1,2,0,2,3} it renders:
| /
|/
That is, just one pixel line between the first two vertices are being drawn.
I have reduced my shaders to the most primitive examples:
Vertex shader:
struct VertexInputType
{
    float4 position : POSITION;
};

struct PixelInputType
{
    float4 position : SV_POSITION;
};

PixelInputType VertexShader(VertexInputType input)
{
    PixelInputType output;
    input.position.w = 1.0f;
    output.position = input.position;
    return output;
}
Pixel shader:
struct PixelInputType
{
    float4 position : SV_POSITION;
};

float4 PixelShader(PixelInputType input) : SV_TARGET
{
    float4 color;
    color.r = 0;
    color.g = 0;
    color.b = 0;
    color.a = 1;
    return color;
}
As vertices I'm using DirectX::XMFLOAT3:
D3D11_INPUT_ELEMENT_DESC polygon_layout[1];
polygon_layout[0].SemanticName = "POSITION";
polygon_layout[0].SemanticIndex = 0;
polygon_layout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygon_layout[0].InputSlot = 0;
polygon_layout[0].AlignedByteOffset = 0;
polygon_layout[0].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygon_layout[0].InstanceDataStepRate = 0;
d3d11_device->CreateInputLayout(polygon_layout, 1, compiled_vshader_buffer->GetBufferPointer(), compiled_vshader_buffer->GetBufferSize(), &input_layout);
D3D11_BUFFER_DESC vertex_buffer_desc;
vertex_buffer_desc.Usage = D3D11_USAGE_DEFAULT;
vertex_buffer_desc.ByteWidth = sizeof(DirectX::XMFLOAT3) * 4;
vertex_buffer_desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertex_buffer_desc.CPUAccessFlags = 0;
vertex_buffer_desc.MiscFlags = 0;
vertex_buffer_desc.StructureByteStride = 0;
DirectX::XMFLOAT3 vertices[4];
vertices[0].x = -0.5; vertices[0].y = -0.5; vertices[0].z = 0;
vertices[1].x = -0.5; vertices[1].y = 0.5; vertices[1].z = 0;
vertices[2].x = 0.5; vertices[2].y = 0.5; vertices[2].z = 0;
vertices[3].x = 0.5; vertices[3].y = -0.5; vertices[3].z = 0;
D3D11_SUBRESOURCE_DATA vertex_buffer_data;
vertex_buffer_data.pSysMem = vertices;
vertex_buffer_data.SysMemPitch = 0;
vertex_buffer_data.SysMemSlicePitch = 0;
hr = d3d11_device->CreateBuffer(&vertex_buffer_desc, &vertex_buffer_data, &vertex_buffer);
D3D11_BUFFER_DESC index_buffer_desc;
index_buffer_desc.Usage = D3D11_USAGE_DEFAULT;
index_buffer_desc.ByteWidth = sizeof(int32_t) * 6;
index_buffer_desc.BindFlags = D3D11_BIND_INDEX_BUFFER;
index_buffer_desc.CPUAccessFlags = 0;
index_buffer_desc.MiscFlags = 0;
index_buffer_desc.StructureByteStride = 0;
int32_t indices[6];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 2;
indices[4] = 3;
indices[5] = 0;
D3D11_SUBRESOURCE_DATA index_buffer_data;
index_buffer_data.pSysMem = indices;
index_buffer_data.SysMemPitch = 0;
index_buffer_data.SysMemSlicePitch = 0;
hr = d3d11_device->CreateBuffer(&index_buffer_desc, &index_buffer_data, &index_buffer);
// during rendering I set:
unsigned int stride = sizeof(DirectX::XMFLOAT3);
unsigned int offset = 0;
d3d11_context->IASetVertexBuffers(0, 1, &vertex_buffer, &stride, &offset);
d3d11_context->IASetIndexBuffer(index_buffer, DXGI_FORMAT_R32_UINT, 0);
d3d11_context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3d11_context->RSSetState(rasterizer_state);
d3d11_context->IASetInputLayout(input_layout);
d3d11_context->VSSetShader(vertex_shader, NULL, 0);
d3d11_context->PSSetShader(pixel_shader, NULL, 0);
// and render with:
d3d11_context->DrawIndexed(6, 0, 0);
When I inspect the shaders with ID3D11ShaderReflection::GetGSInputPrimitive(), I get D3D_PRIMITIVE_UNDEFINED for both the vertex shader and the pixel shader.
I am setting the rasterizer stage with D3D11_FILL_SOLID and D3D11_CULL_NONE.
Is there any setting or state in the D3D11 context that could explain such a behavior?
I'm happy for any ideas where to look.
Thanks in advance!
Firstly, a triangle strip draws exactly what you'd expect: a sequence of triangles. Each index in the index array is combined with the two previous indices to create a triangle.
I'd suggest that since your triangle list's index count is not divisible by 3, DirectX may be rendering incorrectly (remember that, as this is a high-performance system, it skips checks and balances where it can to promote speed).
Try drawing your expected results on paper after reviewing the logic behind each of the draw modes (list, strip, fan, etc.) to be sure you are using the correct vertex ordering and draw mode.
Good luck!
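That assembly rule can be sketched with a small helper (hypothetical illustration code, not a D3D API): every index after the first two closes a triangle with the two indices before it.

```cpp
#include <array>
#include <cassert>
#include <vector>

// Primitive assembly for a triangle strip: index i (for i >= 2)
// forms a triangle with indices i-2 and i-1.
std::vector<std::array<int, 3>> stripToTriangles(const std::vector<int>& indices) {
    std::vector<std::array<int, 3>> triangles;
    for (std::size_t i = 2; i < indices.size(); ++i)
        triangles.push_back({indices[i - 2], indices[i - 1], indices[i]});
    return triangles;
}
```

So the strip {0,1,2,3,0} produces three triangles: (0,1,2), (1,2,3), and (2,3,0). (A real strip also flips the winding of every other triangle, which this sketch ignores; with D3D11_CULL_NONE that distinction doesn't change which pixels are covered.)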
It turns out that the code was not the problem; somewhere earlier, something had changed the Direct3D state.
Calling context->ClearState() solved the issue.
I am working on an OpenGL engine and my textures are being rendered oddly. The textures are mostly complete and working, but they have strange little interruptions. Here's what it looks like.
The bottom-right corner shows what the textures are supposed to look like; there are also randomly colored squares of blue peppered in there. Solid (untextured) squares do not have these interruptions.
I can provide code, but I'm not sure what to show because I've checked everywhere and I don't know where the problem is from.
I am working on a Java and a C++ version. Here is the renderer in Java (If you want to see something else just ask):
public class BatchRenderer2D extends Renderer2D {

    private static final int MAX_SPRITES = 60000;
    private static final int VERTEX_SIZE = Float.BYTES * 3 + Float.BYTES * 2 + Float.BYTES * 1 + Float.BYTES * 1;
    private static final int SPRITE_SIZE = VERTEX_SIZE * 4;
    private static final int BUFFER_SIZE = SPRITE_SIZE * MAX_SPRITES;
    private static final int INDICES_SIZE = MAX_SPRITES * 6;

    private static final int SHADER_VERTEX_INDEX = 0;
    private static final int SHADER_UV_INDEX = 1;
    private static final int SHADER_TID_INDEX = 2;
    private static final int SHADER_COLOR_INDEX = 3;

    private int VAO;
    private int VBO;
    private IndexBuffer IBO;
    private int indexCount;
    private FloatBuffer buffer;

    private List<Integer> textureSlots = new ArrayList<Integer>();

    public BatchRenderer2D() {
        init();
    }

    public void destroy() {
        IBO.delete();
        glDeleteBuffers(VBO);
        glDeleteVertexArrays(VAO);
    }

    public void init() {
        VAO = glGenVertexArrays();
        VBO = glGenBuffers();

        glBindVertexArray(VAO);
        glBindBuffer(GL_ARRAY_BUFFER, VBO);
        glBufferData(GL_ARRAY_BUFFER, BUFFER_SIZE, GL_DYNAMIC_DRAW);

        glEnableVertexAttribArray(SHADER_VERTEX_INDEX);
        glEnableVertexAttribArray(SHADER_UV_INDEX);
        glEnableVertexAttribArray(SHADER_TID_INDEX);
        glEnableVertexAttribArray(SHADER_COLOR_INDEX);

        glVertexAttribPointer(SHADER_VERTEX_INDEX, 3, GL_FLOAT, false, VERTEX_SIZE, 0);
        glVertexAttribPointer(SHADER_UV_INDEX, 2, GL_FLOAT, false, VERTEX_SIZE, 3 * 4);
        glVertexAttribPointer(SHADER_TID_INDEX, 1, GL_FLOAT, false, VERTEX_SIZE, 3 * 4 + 2 * 4);
        glVertexAttribPointer(SHADER_COLOR_INDEX, 4, GL_UNSIGNED_BYTE, true, VERTEX_SIZE, 3 * 4 + 2 * 4 + 1 * 4);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        int[] indices = new int[INDICES_SIZE];
        int offset = 0;
        for (int i = 0; i < INDICES_SIZE; i += 6) {
            indices[i + 0] = offset + 0;
            indices[i + 1] = offset + 1;
            indices[i + 2] = offset + 2;

            indices[i + 3] = offset + 2;
            indices[i + 4] = offset + 3;
            indices[i + 5] = offset + 0;

            offset += 4;
        }
        IBO = new IndexBuffer(indices, INDICES_SIZE);

        glBindVertexArray(0);
    }
    @Override
    public void begin() {
        glBindBuffer(GL_ARRAY_BUFFER, VBO);
        buffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY).asFloatBuffer();
    }

    @Override
    public void submit(Renderable2D renderable) {
        Vector3f position = renderable.getPosition();
        Vector2f size = renderable.getSize();
        Vector4f color = renderable.getColor();
        List<Vector2f> uv = renderable.getUV();
        float tid = renderable.getTID();

        float c = 0;
        float ts = 0.0f;
        if (tid > 0) {
            boolean found = false;
            for (int i = 0; i < textureSlots.size(); i++) {
                if (textureSlots.get(i) == tid) {
                    ts = (float) (i + 1);
                    found = true;
                    break;
                }
            }
            if (!found) {
                if (textureSlots.size() >= 32) {
                    end();
                    flush();
                    begin();
                }
                textureSlots.add((int) tid);
                ts = (float) textureSlots.size();
            }
        } else {
            int r = (int) (color.x * 255);
            int g = (int) (color.y * 255);
            int b = (int) (color.z * 255);
            int a = (int) (color.w * 255);
            c = Float.intBitsToFloat((r << 0) | (g << 8) | (b << 16) | (a << 24));
        }

        transformationBack.multiply(position).store(buffer);
        uv.get(0).store(buffer);
        buffer.put(ts);
        buffer.put(c);

        transformationBack.multiply(new Vector3f(position.x, position.y + size.y, position.z)).store(buffer);
        uv.get(1).store(buffer);
        buffer.put(ts);
        buffer.put(c);

        transformationBack.multiply(new Vector3f(position.x + size.x, position.y + size.y, position.z)).store(buffer);
        uv.get(2).store(buffer);
        buffer.put(ts);
        buffer.put(c);

        transformationBack.multiply(new Vector3f(position.x + size.x, position.y, position.z)).store(buffer);
        uv.get(3).store(buffer);
        buffer.put(ts);
        buffer.put(c);

        indexCount += 6;
    }

    @Override
    public void end() {
        glUnmapBuffer(GL_ARRAY_BUFFER);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    @Override
    public void flush() {
        for (int i = 0; i < textureSlots.size(); i++) {
            glActiveTexture(GL_TEXTURE0 + i);
            glBindTexture(GL_TEXTURE_2D, textureSlots.get(i));
        }

        glBindVertexArray(VAO);
        IBO.bind();
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, NULL);
        IBO.unbind();
        glBindVertexArray(0);

        indexCount = 0;
    }
}
You didn't provide it, but I'm pretty sure I know the reason (I had the same problem; following The Cherno's tutorial? ;)). Just as information, what is your GPU? (AMD seems to have more problems.) I'm linking my thread as a source.
The important part:
Fragment Shader:
#version 330 core

if (fs_in.tid > 0.0) {
    int tid = int(fs_in.tid - 0.5);
    texColor = texture(textures[tid], fs_in.uv);
}
What you are trying to do here is not allowed, as per the GLSL 3.30 specification, which states:
Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with integral constant expressions (see section 4.3.3 "Constant Expressions").
Your tid is not a constant, so this will not work.
In GL 4 this constraint has been somewhat relaxed (quote is from the GLSL 4.50 spec):
When aggregated into arrays within a shader, samplers can only be indexed with a dynamically uniform integral expression, otherwise results are undefined.
Your input isn't dynamically uniform either, so you will get undefined results there too.
(Thanks derhass)
One "simple" solution (not pretty, and I believe with a small impact on performance):
switch (tid) {
    case 0: textureColor = texture(textures[0], fs_in.uv); break;
    ...
    case 31: textureColor = texture(textures[31], fs_in.uv); break;
}
Also, as a small note: you're doing a lot of matrix multiplications there for squares. You could simply multiply the first vertex and then derive the others by adding the offsets; it boosted my performance by around 200 fps (in your example: multiply, then add y, then add x, then subtract y again).
Edit:
Clearly my algebra is not where it should be. What I said you could do (now struck through) is completely wrong, sorry.
As a project, I have to generate random NxN rough terrain in modern OpenGL. For this I use a height map, rendering each 2xN row with a triangle strip.
The shaders are basic, specifying a shade of yellow corresponding to the height (so I can see the bends; I have a top-down camera). Interpolation is on, but for some reason weird sharp triangular shapes get rendered.
1) They always appear on the right side of the screen.
2) They are bigger than the unit triangle I render.
eg: I don't have the reputation to post images, so...
8x8 http://imgbox.com/flC187WW
128x128 http://i.imgbox.com/f1ebrk0V.png
And here's the code:
void drawMeshRow(int rno, float oy) {
    GLfloat meshVert[MESHSIZE * 2 * 3];
    for (int i = 0; i < 2 * MESHSIZE; ++i) {
        meshVert[3*i] = (i/2)*(2.0/(MESHSIZE-1)) - 1;
        if (i & 1) {
            meshVert[3*i + 1] = oy;
            meshVert[3*i + 2] = heightMap[rno][i/2];
        }
        else {
            meshVert[3*i + 1] = oy + (2.0/(MESHSIZE-1));
            meshVert[3*i + 2] = heightMap[rno + 1][i/2];
        }
    }
    glBufferData(GL_ARRAY_BUFFER, 2 * 3 * MESHSIZE * sizeof(GLfloat), meshVert, GL_STREAM_DRAW);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, MESHSIZE * 2 * 3);
}

void drawMesh() {
    glUseProgram(shader);
    glBindBuffer(GL_ARRAY_BUFFER, meshBuff);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    for (int i = 0; i < MESHSIZE - 1; ++i)
        drawMeshRow(i, (2.0 / (MESHSIZE - 1)) * i - 1);
    glDisableVertexAttribArray(0);
}
drawMesh is called each iteration of the main loop.
Shaders:
Vertex shader
#version 330 core
layout(location = 0) in vec3 pos;
smooth out float height;

void main() {
    gl_Position.xyz = pos;
    height = pos.z;
    gl_Position.w = 1.0;
}
Fragment Shader
#version 330 core
out vec3 pcolor;
smooth in float height;

void main() {
    pcolor = vec3(1.0, 1.0, height);
}
You're passing the wrong count to glDrawArrays():
glDrawArrays(GL_TRIANGLE_STRIP, 0, MESHSIZE * 2 * 3);
The last argument is the vertex count, while the value you pass is the total number of coordinates. The correct call is:
glDrawArrays(GL_TRIANGLE_STRIP, 0, MESHSIZE * 2);
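The relationship between the two counts can be sketched as follows (a hypothetical helper for illustration; the constant 3 matches the x, y, z layout in drawMeshRow):

```cpp
#include <cassert>

// glDrawArrays takes the number of vertices to draw, not the number
// of floats in the buffer. With 3 floats (x, y, z) per vertex, an
// array of MESHSIZE * 2 * 3 floats holds only MESHSIZE * 2 vertices;
// passing the float count asks the GPU to read past the end of the
// buffer, which shows up as stray triangles.
constexpr int kFloatsPerVertex = 3;

constexpr int vertexCount(int floatCount) {
    return floatCount / kFloatsPerVertex;
}
```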