Loading generated texture data gives incorrect results in libGDX/LWJGL (OpenGL)

I generate the texture data as follows:
final int width = 256;
final int height = 256;
final float[][][] data = new float[width][height][4];
FloatBuffer dataBuf;
int textureHandle;
FrameBuffer testFrame;

@Override
public void create () {
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            data[i][j][0] = 0.4f;  /* r */
            data[i][j][1] = 0.38f; /* g */
            data[i][j][2] = 0.2f;  /* b */
            data[i][j][3] = 0.9f;  /* a */
        }
    }
    /* Float byte size * RGBA * width * height */
    dataBuf = ByteBuffer.allocateDirect(Float.BYTES * 4 * width * height).asFloatBuffer();
    for (float[][] dat : data) {
        for (float[] floats : dat) {
            dataBuf.put(floats, 0, 4);
        }
    }
    dataBuf.position(0); /* reset the position to the beginning of the buffer */
    textureHandle = Gdx.gl.glGenTexture();
    Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
    Gdx.gl.glBindTexture(GL30.GL_TEXTURE_2D, textureHandle);
    Gdx.gl.glTexParameteri(GL30.GL_TEXTURE_2D, GL30.GL_TEXTURE_MIN_FILTER, GL30.GL_NEAREST);
    Gdx.gl.glTexParameteri(GL30.GL_TEXTURE_2D, GL30.GL_TEXTURE_MAG_FILTER, GL30.GL_LINEAR);
    Gdx.gl.glTexImage2D(
        GL30.GL_TEXTURE_2D, 0, GL30.GL_RGBA32F,
        width, height, 0, GL30.GL_RGBA, GL30.GL_FLOAT, dataBuf
    );
}
The shaders behave correctly: they were tested against FrameBuffer objects, and they display the contents of those framebuffers as expected.
However, when the hand-generated texture is rendered, its colors deviate from the original values.
In most cases the values provided through the FloatBuffer result in a black texture; sometimes there is an unexpected color (e.g. lime-green instead of beige).
Unfortunately I couldn't experiment much with glPixelStorei, because the interface is missing most of its parameters. In any case glGetError() always returns 0, so I suspect the data is somehow incorrectly packed into the dataBuf byte stream.
What could be the problem here?
Edit: some debugging details:
glGetError() is always zero.
The individual components seem to carry a rough idea of the data, but most value combinations produce a black texture:
r: 1.0f, g: 1.0f, b: 1.0f, a: 1.0f --> black screen
r: 0.9f, g: 0.9f, b: 0.9f, a: 0.9f --> white screen
r: 0.9f, g: 0.0f, b: 0.0f, a: 0.9f --> red screen
r: 0.0f, g: 0.9f, b: 0.0f, a: 0.9f --> green screen
r: 0.0f, g: 0.0f, b: 0.9f, a: 0.9f --> blue screen
r: 0.5f, g: 0.5f, b: 0.5f, a: 0.5f --> black screen
r: 0.4f, g: 0.32f, b: 0.2f, a: 0.9f --> green screen
I suspect the above is caused by a conversion error between the floating-point values in dataBuf and OpenGL's GL_FLOAT when the texture is uploaded.
Otherwise the shaders and the setup work correctly, as they were tested with a FrameBuffer's color attachment and behave as expected with all of the above values. The difference there was that the color texture was not generated by hand, but rendered into the framebuffer via glClear(GL_COLOR_BUFFER_BIT);
Using integer buffers also works as expected (an integer array with values 0-255):
dataBuf.position(0);
Gdx.gl.glTexImage2D(
GL30.GL_TEXTURE_2D, 0, GL30.GL_RGBA,
width, height, 0, GL30.GL_RGBA, GL30.GL_UNSIGNED_INT, dataBuf
);
The same behavior is present with the LWJGL3 backend.

I am able to replicate the issue with the given code, and it can be fixed with a one-line change. Your FloatBuffer defaults to big-endian byte order, and it looks like libGDX, LWJGL, and/or OpenGL expect little-endian. I made your example into an executable test case here: https://github.com/yellowstonegames/SquidLib/blob/master/squidlib/src/test/java/squidpony/gdx/tests/issues/Issue6516.java#L37 (the test case doesn't depend on the library as a whole; it's just handy to test libGDX issues in a project that already depends on libGDX and has assets available). My fix is to change:
dataBuf = ByteBuffer.allocateDirect(Float.BYTES * 4 * width * height).asFloatBuffer();
to:
dataBuf = ByteBuffer.allocateDirect(Float.BYTES * 4 * width * height).order(ByteOrder.LITTLE_ENDIAN).asFloatBuffer();
It looks like everything else you had is correct; I get the intended muddy brown color when I draw that texture, instead of lime green.
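As a side note (my own suggestion, not something the linked test case exercises): rather than hard-coding LITTLE_ENDIAN, you can request the platform's native byte order, which is what libGDX's own BufferUtils helpers use for the direct buffers they create:
dataBuf = ByteBuffer
        .allocateDirect(Float.BYTES * 4 * width * height)
        .order(ByteOrder.nativeOrder()) // native order matches what the GL driver reads
        .asFloatBuffer();
On x86 and most ARM systems the native order is little-endian, so in practice the two fixes are equivalent.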

Related

DirectX9 C++: Recoloring vertex data in real time

I'm very new to DirectX and I'm starting to get a grasp on how the API functions.
I've managed to get triangles showing and rendering properly using these functions:
Initializing the vertices:
void Menu::InitializeMenu(float x, float y, float width, float height, D3DCOLOR color, IDirect3DDevice9* d3dDevice)
{
    CUSTOMVERTEX vertices[] =
    {
        { x,         y,          0.5f, 1.0f, color },
        { x + width, y,          0.5f, 1.0f, color },
        { x + width, y + height, 0.5f, 1.0f, color },
        { x,         y,          0.5f, 1.0f, color },
        { x,         y + height, 0.5f, 1.0f, color },
        { x + width, y + height, 0.5f, 1.0f, color },
    };
    if (FAILED(d3dDevice->CreateVertexBuffer(6 * sizeof(CUSTOMVERTEX), 0, D3DFVF_CUSTOMVERTEX, D3DPOOL_DEFAULT, &m_vertexBuffer, NULL)))
        return;
    void *locked_buffer;
    if (FAILED(m_vertexBuffer->Lock(0, sizeof(vertices), (void **)&locked_buffer, 0)))
        return;
    memcpy(locked_buffer, vertices, sizeof(vertices));
    m_vertexBuffer->Unlock();
}
Everything here is defined within the Menu class.
Drawing:
void Menu::RenderMenu(IDirect3DDevice9 *d3dDevice)
{
    d3dDevice->SetRenderState(D3DRS_LIGHTING, FALSE);
    d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    d3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    d3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
    d3dDevice->SetStreamSource(0, m_vertexBuffer, 0, sizeof(CUSTOMVERTEX));
    d3dDevice->SetFVF(D3DFVF_CUSTOMVERTEX);
    d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);
}
Everything works perfectly; I get my two triangles rendered, which in turn produce a semi-transparent quad.
Now the Issue:
I want to be able to change the colors of the vertices in my triangles after the program has started rendering (everything has been initialized already and rendered at least once).
Things I've thought about:
- I've thought about calling the InitializeMenu function with different parameters to reinitialize the vertices with a different color. The reason I haven't done it is that it seems very inefficient and impractical.
- Materials: I have not implemented materials, because I don't know how (yet) and because I'm hoping to find a simpler alternative. All I need is the vertex colors. If materials are the only way to accomplish this, I will implement them.
- Shaders: I understand you can color vertices with shaders, but I have very little shader experience, and as stated before I'd rather find a simpler alternative. I've gotten to the point where I can change the color of vertices in a shader in real time; it was in GLSL, but I'm sure it doesn't differ too much. The issue comes when I want to add multiple quads (a collection of 2 triangles for each quad): I only know how to change the color of all vertices coming into the vertex shader. As before, though, if shaders are the only way to accomplish this, I'll implement them. Please just point me in the right direction; I have VERY little understanding of how shaders work at the low level (I understand the concept and flow, I just don't know how to use them to my advantage effectively).
- Research: I've looked everywhere; maybe I'm not asking my question properly, but I cannot find an answer anywhere.
This is actually my first time posting a question; usually someone has already asked what I want to know. I've tried to explain my problem as best I could, but if it's still unclear feel free to ask for more code or information.
P.S.: I'm using Windows 8 desktop, not sure if that really matters.
To update the vertices, you will need to do something similar to InitializeMenu, but without calling CreateVertexBuffer again. You will also need to make a slight modification to how you create the vertex buffer.
There are two types of vertex buffers: static and dynamic. A static vertex buffer does not allow the CPU to make changes after creation, whereas a dynamic vertex buffer does.
To create a dynamic vertex buffer, pass D3DUSAGE_DYNAMIC to CreateVertexBuffer:
d3dDevice->CreateVertexBuffer(6 * sizeof(CUSTOMVERTEX), D3DUSAGE_DYNAMIC, D3DFVF_CUSTOMVERTEX, D3DPOOL_DEFAULT, &m_vertexBuffer, NULL)
You can then make a new function like this to change color:
void Menu::ChangeColor(D3DCOLOR color)
{
    CUSTOMVERTEX *locked_buffer;
    if (FAILED(m_vertexBuffer->Lock(0, 0, (void **)&locked_buffer, 0))) {
        return;
    }
    for (int i = 0; i < 6; i++) {
        // use whatever you called color in your CUSTOMVERTEX struct
        locked_buffer[i].color = color;
    }
    m_vertexBuffer->Unlock();
}
This code essentially locks the vertex buffer and lets you update its contents from the CPU. You don't need to recreate the vertex buffer; you just update the values you want.
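A minimal usage sketch (my own illustration; menu, d3dDevice, and frameCount are assumed to exist in the calling code):
// Pulse the quad's alpha once per frame before drawing it.
BYTE alpha = (BYTE)(128 + 127 * sinf(frameCount * 0.05f));
menu.ChangeColor(D3DCOLOR_ARGB(alpha, 255, 0, 0)); // animated semi-transparent red
menu.RenderMenu(d3dDevice);
With a dynamic buffer you can also pass D3DLOCK_DISCARD as the last argument to Lock, which lets the driver hand back fresh memory instead of stalling until the GPU has finished with the old contents.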
You can also put color components into an array (i.e. values that mix into green, blue, yellow, red, purple, etc.) and then use a random function to pick colors from that array.
Like this:
int arr[15] = {10, 210, 140, 180, 250, 189, 183, 107, 183, 107, 60, 2, 55, 85, 48};
srand(time(NULL));
CUSTOMVERTEX vertices[] =
{
    { x,         y,          0.5f, 1.0f, D3DCOLOR_XRGB(arr[rand() % 15], arr[rand() % 15], arr[rand() % 15]) },
    { x + width, y,          0.5f, 1.0f, D3DCOLOR_XRGB(arr[rand() % 15], arr[rand() % 15], arr[rand() % 15]) },
    { x + width, y + height, 0.5f, 1.0f, D3DCOLOR_XRGB(arr[rand() % 15], arr[rand() % 15], arr[rand() % 15]) },
    { x,         y,          0.5f, 1.0f, D3DCOLOR_XRGB(arr[rand() % 15], arr[rand() % 15], arr[rand() % 15]) },
    { x,         y + height, 0.5f, 1.0f, D3DCOLOR_XRGB(arr[rand() % 15], arr[rand() % 15], arr[rand() % 15]) },
    { x + width, y + height, 0.5f, 1.0f, D3DCOLOR_XRGB(arr[rand() % 15], arr[rand() % 15], arr[rand() % 15]) },
};
One thing: if you want several vertices to share one random color, store it in separate D3DCOLOR variables (color1, color2, color3, etc.) first, so later assignments don't overwrite the value.

Better way to ignore a specific colour - Blit

I am given a constantly changing/updated buffer and I need to blit this buffer's pixels to the screen.
For my test code, I read a bitmap and stored it into a buffer.
The thing is, I want to ignore a specific colour when blitting it to the screen using OpenGL.
Currently I use:
glPushMatrix();
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glColor4f(1.0f, 1.0f, 1.0f, 0.0f);

unsigned char* Data = (unsigned char*)Buffer;
for (int I = Bmp.Height(); I > 0; --I)
{
    for (int J = 0; J < Bmp.Width(); ++J)
    {
        if (Data[0] != 0 || Data[1] != 0 || Data[2] != 0) // if the colour is black, we don't draw it
        {
            glRasterPos2i(J, I);
            glDrawPixels(1, 1, GL_BGR, GL_UNSIGNED_BYTE, Data);
        }
        Data += Bmp.Bits() == 32 ? 4 : 3;
    }
    if (Bmp.Bits() == 24)
        Data += (-Bmp.Width() * 3) & 3; // skip the BMP row padding once per row
}
glPopMatrix();
SwapBuffers(DC);
Sleep(1);
So in the above, I have a buffer pointer called Data. I loop through it given a height and width; if the colour is black, I don't draw it, otherwise I use glDrawPixels in combination with glRasterPos2i to draw it to the screen one pixel at a time. Is there a more efficient way to draw all pixels except those of a specific colour? It is a buffer, not a texture; I used Bmp just as an example.
You can use the stencil buffer. There are also ways to do chroma keying in a pixel shader.
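For example, a fragment-shader colour key might look like the sketch below (my illustration, not code from the question; it assumes the buffer is uploaded once per frame into a GL texture with glTexSubImage2D and drawn as a single textured quad, which is far cheaper than per-pixel glDrawPixels calls):
// Hypothetical GLSL 1.20 fragment shader source, embedded as a C string:
// drop every fragment close enough to the key colour, pass the rest through.
const char* fragSrc =
    "uniform sampler2D image;\n"
    "uniform vec3 keyColor; // e.g. vec3(0.0) for black\n"
    "void main() {\n"
    "    vec4 c = texture2D(image, gl_TexCoord[0].st);\n"
    "    if (distance(c.rgb, keyColor) < 0.004) discard; // ~1/255 tolerance\n"
    "    gl_FragColor = c;\n"
    "}\n";
The discard happens per fragment on the GPU, so the CPU no longer has to inspect each pixel.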

DX11 Alpha blending when rendering to a texture

FINAL EDIT:
Resolved... I just needed to learn how alpha blending works in depth. I should have had:
oBlendStateDesc.RenderTarget[a].DestBlendAlpha = D3D11_BLEND_ZERO;
...set to D3D11_BLEND_ONE to preserve the alpha.
When rendering to the backbuffer once, the problem is not noticeable, as the colours blend normally and that is the final output. When rendering to a texture the same thing applies, but when that texture is then rendered into the backbuffer, the incorrect alpha incorrectly blends the texture into the backbuffer.
I then ran into another issue where the alpha seemed to be decreasing. This is because the colour is blended twice, for example...
Source.RGBA = 1.0f, 0.0f, 0.0f, 0.5f
Dest.RGBA = 0.0f, 0.0f, 0.0f, 0.0f
Render into texture...
Result.RGB = Source.RGB * Source.A + Dest.RGB * (1 - Source.A) = 0.5f, 0.0f, 0.0f
Result.A = Source.A * 1 + Dest.A * 1 = 0.5f
Now...
Source.RGBA = 0.5f, 0.0f, 0.0f, 0.5f
Dest.RGBA = 0.0f, 0.0f, 0.0f, 0.0f
Render into backbuffer...
Result.RGB = Source.RGB * Source.A + Dest.RGB * (1 - Source.A) = 0.25f, 0.0f, 0.0f
Result.A = Source.A * 1 + Dest.A * 1 = 0.5f
To resolve this, when rendering the texture into the backbuffer I use the same blend state but change SrcBlend to D3D11_BLEND_ONE, so the colour is not multiplied by alpha twice.
Hopefully this helps anyone else having a similar problem....
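Putting both fixes together, the two blend states would look roughly like this (my own sketch of what this edit describes; the m_poBlendState_* names are hypothetical):
// Blend state for rendering INTO the texture: standard alpha blending for
// colour, but DestBlendAlpha = ONE so the render target's alpha is preserved.
D3D11_BLEND_DESC oDesc = {};
oDesc.RenderTarget[0].BlendEnable = TRUE;
oDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
oDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
oDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
oDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
oDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE; // was D3D11_BLEND_ZERO
oDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
oDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
m_poDevice->CreateBlendState(&oDesc, &m_poBlendState_IntoTexture);

// Blend state for compositing that texture into the backbuffer: its colour is
// already multiplied by alpha, so use ONE instead of SRC_ALPHA for SrcBlend.
oDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
m_poDevice->CreateBlendState(&oDesc, &m_poBlendState_Composite);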
EDITEND
To increase performance I'm attempting to render a string of text that never changes into a texture, to save rendering each individual character every time.
Since I'm rendering strictly in 2D, I've disabled depth & stencil testing and enabled alpha blending.
The problem is there doesn't seem to be any alpha blending happening; whatever is drawn last overwrites the current pixel with its own data... no blending.
I use a single blend state which I do not change. When rendering to the backbuffer, the blending works fine. When rendering the final texture to the backbuffer, the blending also works fine. It's just when I render to the texture that blending seems to fail.
Here's how I set up my single blend state:
D3D11_BLEND_DESC oBlendStateDesc;
oBlendStateDesc.AlphaToCoverageEnable = 0;
oBlendStateDesc.IndependentBlendEnable = 0; // false, so the loop below isn't strictly needed... but just in case
for (unsigned int a = 0; a < 8; ++a)
{
    oBlendStateDesc.RenderTarget[a].BlendEnable = 1;
    oBlendStateDesc.RenderTarget[a].SrcBlend = D3D11_BLEND_SRC_ALPHA;
    oBlendStateDesc.RenderTarget[a].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
    oBlendStateDesc.RenderTarget[a].BlendOp = D3D11_BLEND_OP_ADD;
    oBlendStateDesc.RenderTarget[a].SrcBlendAlpha = D3D11_BLEND_ONE;
    oBlendStateDesc.RenderTarget[a].DestBlendAlpha = D3D11_BLEND_ZERO;
    oBlendStateDesc.RenderTarget[a].BlendOpAlpha = D3D11_BLEND_OP_ADD;
    oBlendStateDesc.RenderTarget[a].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
}
// Create the blend state from the description
HResult = m_poDevice->CreateBlendState(&oBlendStateDesc, &m_poBlendState_Default);
m_poDeviceContext->OMSetBlendState(m_poBlendState_Default, nullptr, 0xffffffff);
Are there any extra steps I am missing to enable blending when rendering to a texture?
EDIT: If I set AlphaToCoverageEnable to true it blends, but looks terrible. That at least confirms the same blend state is being used... it just behaves differently depending on whether I render to the backbuffer or to a texture :/ Here's my texture desc...
m_oTexureDesc.Width = a_oDesc.m_uiWidth;
m_oTexureDesc.Height = a_oDesc.m_uiHeight;
m_oTexureDesc.MipLevels = 1;
m_oTexureDesc.ArraySize = 1;
m_oTexureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
m_oTexureDesc.SampleDesc.Count = 1; // no multisampling
m_oTexureDesc.SampleDesc.Quality = 0;
m_oTexureDesc.Usage = D3D11_USAGE_DEFAULT; //GPU writes & reads
m_oTexureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
m_oTexureDesc.CPUAccessFlags = 0;
m_oTexureDesc.MiscFlags = 0;
EDIT:
Here's some visualization (four screenshots, referenced by number below):
1. Rendering to the backbuffer - alpha blending enabled.
2. Rendering to the texture - alpha blending enabled.
3. Rendering to the backbuffer - alpha blending disabled.
4. The letter T taken from the font file.
* When rendering with AB disabled, the letters match exactly (compare 4 & 3).
* When rendering to the backbuffer with AB enabled, the letters render slightly (hardly noticeably) washed out but still blend (compare 4 & 1).
* When rendering to a texture with AB enabled, the letters render even more noticeably washed out while not blending at all (compare 4 & 2).
Not sure why the colours are washed out with alpha blending enabled... but maybe it's a clue?
EDIT:
If I clear the render-target texture to, say, 0.0f, 0.0f, 1.0f, 1.0f (RGBA, blue), this is the result:
Only the pixels with alpha > 0.0f and < 1.0f blend with the clear colour. Another clue, but I have no idea how to resolve this issue...

How can I copy parts of an image from the buffer into a texture to render?

I have been searching around for a simple solution, but I have not found anything. Currently I am loading a texture from a file and rendering it into the buffer using Visual C++ 2012 Express and DirectX 9. What I want to do is copy parts of the buffer and use the copied part as the texture, instead of the loaded texture.
I want to be able to copy/select like a map editor would do.
EDIT: Problem solved :) It was just dumb mistakes.
You can use the StretchRect function (see documentation).
You should copy a subset of the source buffer into the whole destination buffer (which is the new texture's buffer in your case). Something like this:
LPDIRECT3DTEXTURE9 pTexSrc, // source texture
pTexDst; // new texture (a subset of the source texture)
// create the textures
// ...
LPDIRECT3DSURFACE9 pSrc, pDst;
pTexSrc->GetSurfaceLevel(0, &pSrc);
pTexDst->GetSurfaceLevel(0, &pDst);
RECT rect; // (x0, y0, x1, y1) - coordinates of the subset to copy
rect.left = x0;
rect.right = x1;
rect.top = y0;
rect.bottom = y1;
pd3dDevice->StretchRect(pSrc, &rect, pDst, NULL, D3DTEXF_NONE);
// the last parameter could also be D3DTEXF_POINT or D3DTEXF_LINEAR
pSrc->Release();
pDst->Release(); // remember to release the surfaces when done !!!
EDIT:
OK, I've just gone through the tons of your code, and I think the best solution would be to use UV coordinates instead of copying subsets of the palette texture. You should calculate the appropriate UV coordinates for a given tile in game_class::game_gui_add_current_graphic and use them in the CUSTOMVERTEX structure:
float width; // the width of the palette texture
float height; // the height of the palette texture
float tex_x, tex_y; // the coordinates of the upper left corner
// of the palette texture's subset to use for
// the current tile texturing
float tex_w, tex_h; // the width and height of the above mentioned subset
float u0, u1, v0, v1;
u0 = tex_x / width;
v0 = tex_y / height;
u1 = u0 + tex_w / width;
v1 = v0 + tex_h / height;
// create the vertices using the CUSTOMVERTEX struct
CUSTOMVERTEX vertices[] = {
{ 0.0f, 32.0f, 1.0f, u0, v1, D3DCOLOR_XRGB(255, 0, 0), },
{ 0.0f, 0.0f, 1.0f, u0, v0, D3DCOLOR_XRGB(255, 0, 0), },
{ 32.0f, 32.0f, 1.0f, u1, v1, D3DCOLOR_XRGB(0, 0, 255), },
{ 32.0f, 0.0f, 1.0f, u1, v0, D3DCOLOR_XRGB(0, 255, 0), } };
Example: your palette consists of 3 rows and 4 columns with 12 possible cell textures. Each texture is 32 x 32, so tex_w = tex_h = 32;, width = 4 * tex_w; and height = 3 * tex_h;. Suppose you want to calculate UV coordinates for a tile that should be textured with the image in the second row and the third column of the palette. Then tex_x = (3-1)*tex_w; and tex_y = (2-1)*tex_h;. Finally, you calculate the UVs as in the code above (in this example you'll get {u0, v0, u1, v1} = {(3-1)/4, (2-1)/3, 3/4, 2/3} = {0.5, 0.33, 0.75, 0.67}).
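The calculation is mechanical enough to wrap in a small helper; a sketch with hypothetical names:
// Compute the UV rectangle of palette cell (row, col), both 1-based.
// tex_w/tex_h are the cell size; width/height are the palette texture size.
void cell_uvs(int row, int col,
              float tex_w, float tex_h, float width, float height,
              float& u0, float& v0, float& u1, float& v1)
{
    u0 = (col - 1) * tex_w / width;
    v0 = (row - 1) * tex_h / height;
    u1 = u0 + tex_w / width;
    v1 = v0 + tex_h / height;
}
// The example above: cell_uvs(2, 3, 32, 32, 128, 96, u0, v0, u1, v1)
// yields {0.5, 0.33, 0.75, 0.67}.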

glDrawElements drawing all objects connected

I can't figure out how to get glDrawElements to not connect everything it draws...
//Draw Reds
glEnableVertexAttribArray(vLoc);
glEnableVertexAttribArray(cLoc);
glBindBuffer(GL_ARRAY_BUFFER,positionBufferRed);
glVertexAttribPointer(vLoc,3,GL_FLOAT,GL_FALSE,0,0);
glBindBuffer(GL_ARRAY_BUFFER,redBuffer);
glVertexAttribPointer(cLoc,3,GL_FLOAT,GL_FALSE,0,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,elementBufferRed);
glDrawElements(GL_TRIANGLES,nElements*3,GL_UNSIGNED_INT,0);
glDisableVertexAttribArray(vLoc);
glDisableVertexAttribArray(cLoc);
//Draw Blues
glEnableVertexAttribArray(vLoc);
glEnableVertexAttribArray(cLoc);
glBindBuffer(GL_ARRAY_BUFFER,positionBufferBlue);
glVertexAttribPointer(vLoc,3,GL_FLOAT,GL_FALSE,0,0);
glBindBuffer(GL_ARRAY_BUFFER,blueBuffer);
glVertexAttribPointer(cLoc,3,GL_FLOAT,GL_FALSE,0,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,elementBufferBlue);
glDrawElements(GL_TRIANGLES,nElements*3,GL_UNSIGNED_INT,0);
glDisableVertexAttribArray(vLoc);
glDisableVertexAttribArray(cLoc);
This is what the result looks like:
http://img338.imageshack.us/img338/2440/cows.png
It should be two separate cows, but instead they're connected with black lines. Any advice will be appreciated!
My guess is that the number of elements you're trying to draw is wrong (too big), so the GPU tries to fetch triangles that don't exist in the buffer and accidentally reads the vertices of the next mesh, but no valid colours (hence black).
Try glDrawElements(GL_TRIANGLES, nElements, GL_UNSIGNED_INT, 0);
If that doesn't work, try with a hand-coded single triangle.
Here's an example :
GLsizei const TonemapperElementCount = 3;
GLsizeiptr const TonemapperElementSize = TonemapperElementCount * sizeof(glm::uint32);
glm::uint32 const TonemapperElementData[TonemapperElementCount] =
{
0, 1, 2,
};
GLsizei const TonemapperVertexCount = 3;
GLsizeiptr const TonemapperPositionSize = TonemapperVertexCount * sizeof(glm::vec4);
glm::vec4 const TonemapperPositionData[TonemapperVertexCount] =
{ // A full-screen triangle in normalized screen space.
glm::vec4( -1.0f, -1.0f,0,1),
glm::vec4( 3.0f, -1.0f ,0,1),
glm::vec4( -1.0f, 3.0f ,0,1),
};