LWJGL - Shader for a 'manually loaded' NPOT texture

I am trying to display an NPOT texture in my LWJGL window. The result is this:
The texture is repeated four times, upside down, and distorted by horizontal lines. Obviously this is not the intended result. Here is what I feel to be the relevant source code:
Utility method that loads a texture:
// load the image
BufferedImage image = null;
try {
image = ImageIO.read(new File(path));
}
// exit on error
catch (IOException exception) {
Utility.errorExit(exception);
}
// add the image's data to a bytebuffer
ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * 4);
for(int x = 0; x < image.getWidth(); x++) {
for(int y = 0; y < image.getHeight(); y++) {
int pixel = image.getRGB(x, y);
buffer.put((byte) ((pixel >> 16) & 0xFF)); // red
buffer.put((byte) ((pixel >> 8) & 0xFF)); // green
buffer.put((byte) (pixel & 0xFF)); // blue
buffer.put((byte) 0xFF); // alpha
}
}
// flip the buffer
buffer.flip();
// generate and bind the texture
int handle = GL11.glGenTextures();
GL11.glBindTexture(GL31.GL_TEXTURE_RECTANGLE, handle);
//Setup wrap mode
GL11.glTexParameteri(GL31.GL_TEXTURE_RECTANGLE, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL31.GL_TEXTURE_RECTANGLE, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
//Setup texture scaling filtering
GL11.glTexParameteri(GL31.GL_TEXTURE_RECTANGLE, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL31.GL_TEXTURE_RECTANGLE, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
// set the texture data
GL11.glTexImage2D(GL31.GL_TEXTURE_RECTANGLE, 0, GL11.GL_RGBA8, image.getWidth(), image.getHeight(), 0,
GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, buffer);
// return the handle
return handle;
Utility method to bind the texture to the sampler:
// set the sampler's texture unit
GL20.glUniform1i(samplerLocation, GL13.GL_TEXTURE0 + textureUnit);
// bind the texture to the texture unit
GL13.glActiveTexture(GL13.GL_TEXTURE0 + textureUnit);
GL11.glBindTexture(GL31.GL_TEXTURE_RECTANGLE, textureID);
Fragment shader:
#version 150
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect sampler;
in vec2 vTexture;
out vec4 color;
void main()
{
color = texture2DRect(sampler, vTexture);
}
The last piece of information that I feel would be relevant is what my texture coordinates are:
Bottom Left Point: (0, 0)
Top Left Point: (0, 600)
Top Right Point: (800, 600)
Bottom Right Point: (800, 0)
I am guessing I am doing multiple things wrong. Post in the comments if you feel there is more information I could provide. Thanks!
P.S. The reason I say the texture is manually loaded is that I am used to using Slick-Util for loading textures, but I could not use it for this particular texture, as I hear Slick-Util does not support NPOT textures.

You're pushing texels to the buffer in the wrong order.
ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * 4);
for(int x = 0; x < image.getWidth(); x++) {
for(int y = 0; y < image.getHeight(); y++) {
int pixel = image.getRGB(x, y);
buffer.put((byte) ((pixel >> 16) & 0xFF)); // red
buffer.put((byte) ((pixel >> 8) & 0xFF)); // green
buffer.put((byte) (pixel & 0xFF)); // blue
buffer.put((byte) 0xFF); // alpha
}
}
You are iterating over the height in the inner loop, but glTexImage2D expects the data row by row (scanline order), not column by column. Swap your x and y loops.
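As a minimal sketch (leaving the rest of the loading code from the question unchanged), the fill loop with the iteration order swapped would look like this:
ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * 4);
for (int y = 0; y < image.getHeight(); y++) {      // rows (scanlines) in the outer loop
    for (int x = 0; x < image.getWidth(); x++) {   // pixels within a row in the inner loop
        int pixel = image.getRGB(x, y);
        buffer.put((byte) ((pixel >> 16) & 0xFF)); // red
        buffer.put((byte) ((pixel >> 8) & 0xFF));  // green
        buffer.put((byte) (pixel & 0xFF));         // blue
        buffer.put((byte) 0xFF);                   // alpha
    }
}
buffer.flip();
Separately, keep in mind that BufferedImage has its origin in the top-left corner, while the first row you upload to OpenGL ends up at texture coordinate t = 0 (the bottom, with your coordinates). That is the likely cause of the upside-down result; iterating y from image.getHeight() - 1 down to 0 (or flipping the texture coordinates) would handle it.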

Related

OpenGL: Textures are not aligned correctly

I have a problem with OpenGL (in LWJGL) and texture mapping. I'm loading an ARGB image using:
public static ByteBuffer toByteArray(BufferedImage image) {
int[] pixels = new int[image.getWidth() * image.getHeight()];
image.getRGB(0, 0, image.getWidth(), image.getHeight(), pixels, 0, image.getWidth());
ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * 4);
for (int y = 0; y < image.getHeight(); y++) {
for (int x = 0; x < image.getWidth(); x++) {
int pixel = pixels[y * image.getWidth() + x];
buffer.put((byte) ((pixel >> 16) & 0xFF)); // Red component
buffer.put((byte) ((pixel >> 8) & 0xFF)); // Green component
buffer.put((byte) ((pixel >> 0) & 0xFF)); // Blue component
buffer.put((byte) ((pixel >> 24) & 0xFF)); // Alpha component.
}
}
buffer.flip();
return buffer;
}
I'm uploading the textures using
int[] textureIds = new int[textures.size()];
GL11.glGenTextures(textureIds);
int i = 0;
for (Texture texture : textures.values()) {
int textureId = textureIds[i++];
GL13.glActiveTexture(GL13.GL_TEXTURE0);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_REPEAT);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_REPEAT);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 4);
BufferedImage data = texture.load();
ByteBuffer bytes = Texture.toByteArray(data);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, data.getWidth(), data.getHeight(), 0, GL11.GL_RGBA,
GL11.GL_UNSIGNED_BYTE, bytes);
// GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);
texture.setTextureId(textureId);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
}
Unfortunately I'm ending up with something like this:
But actually it should look like this (rendered in Blender):
The texture can be found here.
So everything is crooked and somewhat follows diagonals. The model is made in Blender and therefore has proper texture coordinates. I also managed to load the model and its texture in another engine, but not in mine. Does anyone have an idea how to fix this?

rgba arrays to OpenGL texture

For the GUI for my game, I have a custom texture object that stores the RGBA data for a texture. Each GUI element registered by my game adds to the final GUI texture, and then that texture is overlaid onto the framebuffer after post-processing.
I'm having trouble converting my Texture object to an openGL texture.
First I create a 1D int array that goes rgbargbargba... etc.
public int[] toIntArray(){
int[] colors = new int[(width*height)*4];
int i = 0;
for(int y = 0; y < height; ++y){
for(int x = 0; x < width; ++x){
colors[i] = r[x][y];
colors[i+1] = g[x][y];
colors[i+2] = b[x][y];
colors[i+3] = a[x][y];
i += 4;
}
}
return colors;
}
Here r, g, b, and a are jagged int arrays holding values from 0 to 255. Next I create the int buffer and the texture.
id = glGenTextures();
glBindTexture(GL_TEXTURE_2D, id);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
IntBuffer iBuffer = BufferUtils.createIntBuffer(((width * height)*4));
int[] data = toIntArray();
iBuffer.put(data);
iBuffer.rewind();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_INT, iBuffer);
glBindTexture(GL_TEXTURE_2D, 0);
After that I add a 50x50 red square to the upper left of the texture, bind the texture to the framebuffer shader, and render the fullscreen rect that displays my framebuffer.
frameBuffer.unbind(window.getWidth(), window.getHeight());
postShaderProgram.bind();
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, guiManager.texture()); // this gets the texture id that was created
postShaderProgram.setUniform("gui_texture", 1);
mesh.render();
postShaderProgram.unbind();
And then in my fragment shader, I try displaying the GUI:
#version 330
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D gui_texture;
void main()
{
outColor = texture(gui_texture, Texcoord);
}
But all it outputs is a black window!
I added a red 50x50 rectangle into the upper left corner and verified that it exists, but for some reason it isn't showing in the final output.
That gives me reason to believe that I'm not converting my texture into an opengl texture with glTexImage2D correctly.
Can you see anything I'm doing wrong?
Update 1:
I saw an example doing a similar thing with a float array, so I tried converting my 0-255 values to a 0-1 float array and passing that as the image data, like so:
public float[] toFloatArray(){
float[] colors = new float[(width*height)*4];
int i = 0;
for(int y = 0; y < height; ++y){
for(int x = 0; x < width; ++x){
colors[i] = (( r[x][y] * 1.0f) / 255);
colors[i+1] = (( g[x][y] * 1.0f) / 255);
colors[i+2] = (( b[x][y] * 1.0f) / 255);
colors[i+3] = (( a[x][y] * 1.0f) / 255);
i += 4;
}
}
return colors;
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, toFloatArray());
And it works!
I'm going to leave the question open however as I want to learn why the int buffer wasn't working :)
When you specify GL_UNSIGNED_INT as the type of the "host" data, OpenGL expects 32 bits for each color component. Since OpenGL normalises unsigned integer color data into the range [0.0f, 1.0f], it takes your input values (which only go up to 255) and divides each of them by the maximum value of a 32-bit unsigned integer (about 4.2 billion) to get the final color displayed on screen. 255 / 4,294,967,295 is effectively zero, so everything comes out black. As an exercise, using your original code, set the clear color of the screen to white and see that a black rectangle is in fact being drawn on screen.
You have two options. The first is to convert the color values to the full range of GL_UNSIGNED_INT, which means multiplying each value by 2^24 (equivalently, shifting it left by 24 bits) and trusting that the resulting integer overflow behaves correctly (since Java doesn't have unsigned integer types).
The other, far safer option is to store each 0-255 value in a byte[] array (do not use char: char is 1 byte in C/C++/OpenGL, but 2 bytes in Java) and specify GL_UNSIGNED_BYTE as the element type.
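For illustration, here is a minimal sketch of that byte-based path, modelled on the toIntArray method from the question; toByteBuffer is a hypothetical name, and it assumes the same width, height, r, g, b and a fields:
public ByteBuffer toByteBuffer() {
    // one byte per channel, four channels (RGBA) per pixel
    ByteBuffer buffer = BufferUtils.createByteBuffer(width * height * 4);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // the cast keeps only the low 8 bits, which is exactly what GL_UNSIGNED_BYTE expects
            buffer.put((byte) r[x][y]);
            buffer.put((byte) g[x][y]);
            buffer.put((byte) b[x][y]);
            buffer.put((byte) a[x][y]);
        }
    }
    buffer.flip();
    return buffer;
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, toByteBuffer());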

Wrong depth buffer (to texture) output?

For the SSAO effect I have to generate two textures: normals (in view space) and depth.
I decided to use the depth buffer as a texture, following the Microsoft tutorial (the "Reading the Depth-Stencil Buffer as a Texture" chapter).
Unfortunately, after rendering I get no information out of the depth buffer (the lower image):
I guess that's not right. What is strange is that the depth buffer itself seems to work (I get the right order of faces, etc.).
The depth buffer code:
//create depth stencil texture (depth buffer)
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS;
descDepth.SampleDesc.Count = antiAliasing.getCount();
descDepth.SampleDesc.Quality = antiAliasing.getQuality();
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
ID3D11Texture2D* depthStencil = NULL;
result = device->CreateTexture2D(&descDepth, NULL, &depthStencil);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil texture.", MOD_GRAPHIC);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//setup the description of the shader resource view
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_SRV_DIMENSION_TEXTURE2DMS : D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
ERROR_HANDLE(SUCCEEDED(device->CreateShaderResourceView(depthStencil, &shaderResourceViewDesc, &depthStencilShaderResourceView)),
L"Could not create shader resource view for depth buffer.", MOD_GRAPHIC);
createDepthStencilStates();
//set the depth stencil state.
context->OMSetDepthStencilState(depthStencilState3D, 1);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
// Initialize the depth stencil view.
ZeroMemory(&depthStencilViewDesc, sizeof(depthStencilViewDesc));
// Set up the depth stencil view description.
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_DSV_DIMENSION_TEXTURE2DMS : D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
//depthStencilViewDesc.Flags = D3D11_DSV_READ_ONLY_DEPTH;
// Create the depth stencil view.
result = device->CreateDepthStencilView(depthStencil, &depthStencilViewDesc, &depthStencilView);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil view.", MOD_GRAPHIC);
After rendering the first pass, I set the depth stencil as a texture resource along with the other render targets (color, normals), appending it to the array:
ID3D11ShaderResourceView ** textures = new ID3D11ShaderResourceView *[targets.size()+1];
for (unsigned i = 0; i < targets.size(); i++) {
textures[i] = targets[i]->getShaderResourceView();
}
textures[targets.size()] = depthStencilShaderResourceView;
context->PSSetShaderResources(0, targets.size()+1, textures);
Before the second pass I call context->OMSetRenderTargets(1, &myRenderTargetView, NULL); to unbind the depth buffer (so I can use it as a texture).
Then I render my textures (the render targets from the first pass plus the depth buffer) with a trivial post-process shader, just for debugging purposes (second pass):
Texture2D ColorTexture[3];
SamplerState ObjSamplerState;
float4 main(VS_OUTPUT input) : SV_TARGET0{
float4 Color;
Color = float4(0, 1, 1, 1);
float2 textureCoordinates = input.textureCoordinates.xy * 2;
if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y < 0.5f) {
Color = ColorTexture[0].Sample(ObjSamplerState, textureCoordinates);
}
if (input.textureCoordinates.x > 0.5f && input.textureCoordinates.y < 0.5f) {
textureCoordinates.x -= 0.5f;
Color = ColorTexture[1].Sample(ObjSamplerState, textureCoordinates);
}
if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y > 0.5f) { //depth texture
textureCoordinates.y -= 0.5f;
Color = ColorTexture[2].Sample(ObjSamplerState, textureCoordinates);
}
...
It works fine for the normals texture. Why doesn't it work for the depth buffer (as a shader resource view)?
As per comments:
The texture was rendered and sampled correctly but the data appeared to be uniformly red due to the data lying between 0.999 and 1.0f.
There are a few things you can do to improve the available depth precision; the simplest is to ensure your near and far clip distances are not excessively small/large for the scene you're drawing.
Assuming metres are your unit, a near clip of 0.1 (10 cm) and a far clip of 200 (metres) are much more reasonable than 1 cm and 20 km.
Even so, don't expect to see too many black/dark areas; the non-linear nature of a z-buffer still means most of your depth values are shunted up towards 1. If visualisation of the depth buffer is important, simply rescale the data to the normalised 0-1 range before displaying it.
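For reference, one way to do that rescaling, assuming a standard (non-reversed) perspective projection with near plane n, far plane f and a sampled depth value d in [0, 1] (n, f and d here are my notation, not from the post), is to recover the linear view-space depth first and then normalise it:
z_view = (n * f) / (f - d * (f - n))
d_display = (z_view - n) / (f - n)
This maps d = 0 back to the near plane and d = 1 back to the far plane, so the visualised values are spread evenly between the two instead of being bunched up near 1.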

Better way to ignore specific colour - Blit

I am given a constantly changing/updated buffer and I need to blit this buffer's pixels to the screen.
For my test code, I read a bitmap and stored it into a buffer.
The thing is, I want to ignore a specific colour when blitting it to the screen using OpenGL.
Currently I use:
glPushMatrix();
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glColor4f(1.0f, 1.0f, 1.0f, 0.0f);
unsigned char* Data = (unsigned char*)Buffer;
for (int I = Bmp.Height(); I > 0; --I)
{
for (int J = 0; J < Bmp.Width(); ++J)
{
if (Data[0] != 0 && Data[1] != 0 && Data[2] != 0) //If the colour is black, we don't draw it..
{
glRasterPos2i(J, I);
glDrawPixels(1, 1, GL_BGR, GL_UNSIGNED_BYTE, Data);
}
Data += Bmp.Bits() == 32 ? 4 : 3;
if(Bmp.Bits() == 24)
Data += (-Bmp.Width() * 3) & 3;
}
}
glPopMatrix();
SwapBuffers(DC);
Sleep(1);
So in the above, what I have is some buffer pointer called Data. I then loop through it given a height and width. If the colour is black, I don't draw it; otherwise I use glDrawPixels in combination with glRasterPos2i to draw it to the screen one pixel at a time. Is there a more efficient way to draw all pixels except a specific colour? It is a buffer, not a texture; I used Bmp just as an example.
You can use the stencil buffer. There are also ways to do chroma keying in a pixel shader.

Missing some colors from PNG texture in DirectX during loading and saving?

I use standard DirectX functions (like CreateTexture2D, D3DX11SaveTextureToFile and D3DX11CreateShaderResourceViewFromFile) to load a PNG image, render it onto a newly created texture, and then save it to a file. All the textures have power-of-two sizes.
But while doing this I have noticed that some colors from the PNG end up slightly corrupted (similar to, but not the same as, the colors in the source texture). The same goes for transparency (it works for fully opaque and fully transparent parts, but not for e.g. 34% transparency).
Is there some significant color approximation going on, or am I doing something wrong? If so, how can I fix it?
Here are the two images (left is the source: slightly different colors and some gradient transparency at the bottom; right is the image after loading the first one, rendering it onto the new texture, and saving that to file):
I don't know what causes this behaviour; maybe it's the new texture's description:
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
I have tried to change it to DXGI_FORMAT_R32G32B32A32_FLOAT, but the effect was even stranger:
Here is the code for rendering the source texture onto the new texture:
context->OMSetRenderTargets(1, &renderTargetView, depthStencilView); //to render on new texture instead of the screen
float clearColor[4] = {0.0f, 0.0f, 0.0f, 0.0f}; //red, green, blue, alpha
context->ClearRenderTargetView(renderTargetView, clearColor);
//clear the depth buffer to 1.0 (max depth)
context->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
//rendering
turnZBufferOff();
shader->set(context);
object->render(shader, camera, textureManager, context, 0);
swapChain->Present(0, 0);
And in object->render():
UINT stride;
stride = sizeof(Vertex);
UINT offset = 0;
context->IASetVertexBuffers( 0, 1, &buffers->vertexBuffer, &stride, &offset ); //set vertex buffer
context->IASetIndexBuffer( buffers->indexBuffer, DXGI_FORMAT_R16_UINT, 0 ); //set index buffer
context->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); //set primitive topology
if(textureID){
context->PSSetShaderResources( 0, 1, &textureManager->get(textureID)->texture);
}
ConstantBuffer2DStructure cbPerObj;
cbPerObj.positionAndScale = XMFLOAT4(center.getX(), center.getY(), halfSize.getX(), halfSize.getY());
cbPerObj.textureCoordinates = XMFLOAT4(textureRectToUse[0].getX(), textureRectToUse[0].getY(), textureRectToUse[1].getX(), textureRectToUse[1].getY());
context->UpdateSubresource(constantBuffer, 0, NULL, &cbPerObj, 0, 0);
context->VSSetConstantBuffers(0, 1, &constantBuffer);
context->PSSetConstantBuffers(0, 1, &constantBuffer);
context->DrawIndexed(6, 0, 0);
The shader is very simple:
VS_OUTPUT VS(float4 inPos : POSITION, float2 inTexCoord : TEXCOORD)
{
VS_OUTPUT output;
output.Pos.zw = float2(0.0f, 1.0f);
//inPos(x,y) = {-1,1}
output.Pos.xy = (inPos.xy * positionAndScale.zw) + positionAndScale.xy;
output.TexCoord.xy = inTexCoord.xy * (textureCoordinates.zw - textureCoordinates.xy) + textureCoordinates.xy;
return output;
}
float4 PS(VS_OUTPUT input) : SV_TARGET
{
return ObjTexture.Sample(ObjSamplerState, input.TexCoord);
}
As an optimisation I pass the sprite's size as a shader parameter (this works fine; the texture size, borders, etc. are correct).
Did you set a blend state anywhere? Alpha will not work by default, since the default blend state is no blending at all.
Here is a standard alpha blend state:
D3D11_BLEND_DESC desc;
desc.AlphaToCoverageEnable=false;
desc.IndependentBlendEnable = false;
for (int i =0; i < 8 ; i++)
{
desc.RenderTarget[i].BlendEnable = true;
desc.RenderTarget[i].BlendOp = D3D11_BLEND_OP::D3D11_BLEND_OP_ADD;
desc.RenderTarget[i].BlendOpAlpha = D3D11_BLEND_OP::D3D11_BLEND_OP_ADD;
desc.RenderTarget[i].DestBlend = D3D11_BLEND::D3D11_BLEND_INV_SRC_ALPHA;
desc.RenderTarget[i].DestBlendAlpha = D3D11_BLEND::D3D11_BLEND_ONE;
desc.RenderTarget[i].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE::D3D11_COLOR_WRITE_ENABLE_ALL;
desc.RenderTarget[i].SrcBlend = D3D11_BLEND::D3D11_BLEND_SRC_ALPHA;
desc.RenderTarget[i].SrcBlendAlpha = D3D11_BLEND::D3D11_BLEND_ONE;
}
ID3D11BlendState* state;
device->CreateBlendState(&desc,&state);
return state;
Also, I would clear with the alpha component set to 1 instead of 0.
I suspect your problems stem from importing a layered Fireworks PNG file. Fireworks layered PNGs retain their layers when imported into other software such as Flash and Freehand. However, to get an exact replica of a layered Fireworks PNG in Photoshop, you need to export that layered PNG as a flattened PNG. Thus, opening it in Photoshop and flattening it there is not the solution; the solution is to open and flatten it in Fireworks. (Note: PNGs can be 8-, 24- or 32-bit; maybe that needs to be accounted for in your analysis.)