Loading a texture using OpenGL - C++

I am trying to load a texture via OpenGL for a 2D platformer, and the code seems to be crashing in this exact function. My lack of knowledge of C++ and OpenGL seems to be the problem; please help!
bool Texture::LoadTextureFromFile( std::string path )
{
    //Texture loading success
    bool textureLoaded = false;

    //Generate and set current image ID
    GLuint imgID = 0;
    glGenTextures( 1, &imgID );
    glBindTexture( GL_TEXTURE_2D, imgID );

    //Load image
    GLboolean success = glLoadTextures( path.c_str() );

    //Image loaded successfully
    if( success == GL_TRUE )
    {
        //Convert image to RGBA
        // success = ilConvertImage( IL_RGBA, IL_UNSIGNED_BYTE );
        if( success == GL_TRUE )
        {
            //Create texture from file pixels
            textureLoaded = LoadTextureFromPixels32
                ( (GLuint*)glGetDoublev, (GLuint*)glGetIntegerv( GL_TEXTURE_WIDTH ), GLuint*(glGetIntegerv( GL_TEXTURE_HEIGHT )) );
        }

        //Delete file from memory
        glDeleteTextures( 1, &imgID );
    }

    //Report error
    if( !textureLoaded )
    {
        printf( "Unable to load %s\n", path.c_str() );
    }

    return textureLoaded;
}

There are a few possible issues.
Firstly, where does glLoadTextures() come from, and what does it do? It is not part of the OpenGL specification, and it's not clear what it does. It may well perform the entire texture load for you, in which case the code below just screws things up.
Next, your first argument to LoadTextureFromPixels32() is glGetDoublev. You're passing a pointer to the function glGetDoublev, which is definitely NOT right; that parameter presumably expects a pointer to the loaded image data.
Finally, your code deletes the texture you just created with glDeleteTextures(). That makes no sense: the texture object's ID is stored in imgID, a local variable, and once you delete it the texture is gone.
The normal procedure for creating a texture is:
Load the texture data (using something like SDL_image, or another image loading library)
Create the texture object using glGenTextures(); NOTE: This is your handle to the texture; you MUST store this for future use
Bind it with glBindTexture()
Upload the image data with glTexImage2D()
That's it.
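For reference, here is a minimal sketch of those four steps using SDL2_image (any image loader would do). LoadTexture is a hypothetical helper, not part of your Texture class, and it assumes the file decodes to a tightly packed 32-bit RGBA surface; a robust loader would check surface->format and convert with SDL_ConvertSurfaceFormat first.

#include <string>
#include <cstdio>
#include <SDL_image.h>
#include <GL/gl.h>

GLuint LoadTexture( const std::string& path )
{
    SDL_Surface* surface = IMG_Load( path.c_str() );     // 1. load the pixel data
    if( !surface )
    {
        printf( "IMG_Load failed: %s\n", IMG_GetError() );
        return 0;
    }

    GLuint texID = 0;
    glGenTextures( 1, &texID );                          // 2. create the texture object (keep this ID!)
    glBindTexture( GL_TEXTURE_2D, texID );               // 3. bind it

    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA,             // 4. upload the pixels
                  surface->w, surface->h, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels );

    // Without mipmaps the default MIN filter leaves the texture incomplete,
    // so set simple filtering explicitly.
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

    SDL_FreeSurface( surface );                          // the GL driver owns its own copy now
    return texID;                                        // 0 means failure here
}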

Related

Applying HLSL Pixel Shaders to Win32 Screen Capture

A little background: I'm attempting to make a Windows (10) application which makes the screen look like an old CRT monitor, scanlines, blur, and all. I'm using this official Microsoft screen capture demo as a starting point. At this stage I can capture a window and display it back in a new mouse-through window as if it were the original window.
I am attempting to use the CRT-Royale CRT shaders, which are generally considered the best CRT shaders; these are available in .cg format. I transpile them to HLSL with cgc, then compile the HLSL files to shader byte code with fxc. I can successfully load the compiled shaders and create the pixel shader, which I then set on the D3D context. I then attempt to copy the capture surface frame to a pixel shader resource and set the created shader resource. All of this builds and runs, but I see no difference in the output image and am not sure how to proceed. Below is the relevant code. I am not a C++ developer and am making this as a personal project which I plan to open-source once I have a primitive working version. Any advice is appreciated, thanks.
SimpleCapture::SimpleCapture(
    IDirect3DDevice const& device,
    GraphicsCaptureItem const& item)
{
    m_item = item;
    m_device = device;

    // Set up
    auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
    d3dDevice->GetImmediateContext(m_d3dContext.put());
    auto size = m_item.Size();

    m_swapChain = CreateDXGISwapChain(
        d3dDevice,
        static_cast<uint32_t>(size.Width),
        static_cast<uint32_t>(size.Height),
        static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
        2);

    // ADDED THIS
    HRESULT hr1 = D3DReadFileToBlob(L"crt-royale-first-pass-ps_4_0.fxc", &ps_1_buffer);
    HRESULT hr = d3dDevice->CreatePixelShader(
        ps_1_buffer->GetBufferPointer(),
        ps_1_buffer->GetBufferSize(),
        nullptr,
        &ps_1
    );
    m_d3dContext->PSSetShader(
        ps_1,
        nullptr,
        0
    );
    // END OF ADDED CHANGES

    // Create framepool, define pixel format (DXGI_FORMAT_B8G8R8A8_UNORM), and frame size.
    m_framePool = Direct3D11CaptureFramePool::Create(
        m_device,
        DirectXPixelFormat::B8G8R8A8UIntNormalized,
        2,
        size);
    m_session = m_framePool.CreateCaptureSession(m_item);
    m_lastSize = size;
    m_frameArrived = m_framePool.FrameArrived(auto_revoke, { this, &SimpleCapture::OnFrameArrived });
}
void SimpleCapture::OnFrameArrived(
    Direct3D11CaptureFramePool const& sender,
    winrt::Windows::Foundation::IInspectable const&)
{
    auto newSize = false;

    {
        auto frame = sender.TryGetNextFrame();
        auto frameContentSize = frame.ContentSize();

        if (frameContentSize.Width != m_lastSize.Width ||
            frameContentSize.Height != m_lastSize.Height)
        {
            // The thing we have been capturing has changed size.
            // We need to resize our swap chain first, then blit the pixels.
            // After we do that, retire the frame and then recreate our frame pool.
            newSize = true;
            m_lastSize = frameContentSize;
            m_swapChain->ResizeBuffers(
                2,
                static_cast<uint32_t>(m_lastSize.Width),
                static_cast<uint32_t>(m_lastSize.Height),
                static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
                0);
        }

        {
            auto frameSurface = GetDXGIInterfaceFromObject<ID3D11Texture2D>(frame.Surface());

            com_ptr<ID3D11Texture2D> backBuffer;
            check_hresult(m_swapChain->GetBuffer(0, guid_of<ID3D11Texture2D>(), backBuffer.put_void()));

            // ADDED THIS
            D3D11_TEXTURE2D_DESC txtDesc = {};
            txtDesc.MipLevels = txtDesc.ArraySize = 1;
            txtDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
            txtDesc.SampleDesc.Count = 1;
            txtDesc.Usage = D3D11_USAGE_IMMUTABLE;
            txtDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
            auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
            ID3D11Texture2D *tex;
            d3dDevice->CreateTexture2D(&txtDesc, NULL, &tex);
            frameSurface.copy_to(&tex);
            d3dDevice->CreateShaderResourceView(
                tex,
                nullptr,
                srv_1
            );
            auto texture = srv_1;
            m_d3dContext->PSSetShaderResources(0, 1, texture);
            // END OF ADDED CHANGES

            m_d3dContext->CopyResource(backBuffer.get(), frameSurface.get());
        }
    }

    DXGI_PRESENT_PARAMETERS presentParameters = { 0 };
    m_swapChain->Present1(1, 0, &presentParameters);
    ... // Truncated
Shaders define how things are drawn. However, you don't draw anything - you just copy, which is why the shader doesn't do anything.
What you should do is remove the CopyResource call and instead draw a full-screen quad onto the back buffer (which requires you to create a vertex buffer that you can bind, set the back buffer as the render target, and finally call Draw/DrawIndexed to actually render something, which in turn invokes the shader).
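To make that concrete, here is a rough sketch of what such a draw path could look like. This is not the demo's API: m_backBufferRTV, m_vertexShader, m_inputLayout, m_quadVB and QuadVertex are hypothetical members/types assumed to be created elsewhere (together with a matching vertex shader), and srv_1 is assumed to be a plain ID3D11ShaderResourceView*.

// Sketch only: draw a full-screen quad into the back buffer so the pixel shader runs.
void SimpleCapture::DrawFullScreenQuad()
{
    // Render into the back buffer instead of copying into it.
    m_d3dContext->OMSetRenderTargets(1, &m_backBufferRTV, nullptr);

    D3D11_VIEWPORT vp = {};
    vp.Width = static_cast<float>(m_lastSize.Width);
    vp.Height = static_cast<float>(m_lastSize.Height);
    vp.MaxDepth = 1.0f;
    m_d3dContext->RSSetViewports(1, &vp);

    // Bind the full-screen quad geometry.
    UINT stride = sizeof(QuadVertex);
    UINT offset = 0;
    m_d3dContext->IASetInputLayout(m_inputLayout);
    m_d3dContext->IASetVertexBuffers(0, 1, &m_quadVB, &stride, &offset);
    m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);

    // Bind the shaders and the captured frame as the pixel shader's input.
    m_d3dContext->VSSetShader(m_vertexShader, nullptr, 0);
    m_d3dContext->PSSetShader(ps_1, nullptr, 0);
    m_d3dContext->PSSetShaderResources(0, 1, &srv_1);

    // This draw call is what actually runs the CRT pixel shader.
    m_d3dContext->Draw(4, 0);
}

With something like this in place, OnFrameArrived would call DrawFullScreenQuad() instead of CopyResource before presenting.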
Also - since I'm not sure whether you already do this and just stripped it from the shown code - functions like CreatePixelShader don't return HRESULTs just for the fun of it. Check what is actually returned, because DirectX reports most errors through those return values and expects you to handle them rather than crashing your program.
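For example, the two calls added in the constructor could be wrapped the same way the shown code already wraps GetBuffer with check_hresult, so a failure shows up immediately instead of leaving ps_1 null:

// Fail fast instead of silently continuing with a null shader.
check_hresult(D3DReadFileToBlob(L"crt-royale-first-pass-ps_4_0.fxc", &ps_1_buffer));
check_hresult(d3dDevice->CreatePixelShader(
    ps_1_buffer->GetBufferPointer(),
    ps_1_buffer->GetBufferSize(),
    nullptr,
    &ps_1));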

Qt3D texture parameter

I am using Qt3D (5.11), and I am experiencing asserts when I try to set a QParameter value to be used by a custom fragment shader. Here is the section of code that doesn't seem to be working:
auto entity = new Qt3DRender::QEntity( mRootEntity );
auto material = new Qt3DRender::QMaterial( entity );
// Set up the custom geometry and material for the entity, which works
// fine in tests as long as the fragment shader does not use texture mapping
auto image = new Qt3DRender::QTextureImage( entity );
image->setSource( QString( "qrc:/image.png" ) );
auto texture = new Qt3DRender::QTexture2D( entity );
texture->addTextureImage( image );
material->addParameter( new QParameter( "imageTexture", texture, entity ) );
I've included only the bits of code needed for this question:
Is this a valid way to set a simple texture parameter? If not, what am I missing to set a simple image?
Note that qrc:/image.png is a 256x256 image that I have used elsewhere in this project without a problem.
The code compiles fine, but when I run it I get an assert with the following message: ASSERT: "texture" in file texture\textureimage.cpp, line 94
I am using VS2017 on Windows 10 with Qt 5.11.
I stumbled upon the cause: parenting the QTextureImage to entity leads to the assert. Leaving the parent off entirely (effectively setting it to nullptr) or parenting it to texture fixes the issue.
Here is some working code:
auto texture = new Qt3DRender::QTexture2D( entity );
auto image = new Qt3DRender::QTextureImage( texture );
image->setSource( QString( "qrc:/image.png" ) );
texture->addTextureImage( image );
material->addParameter( new QParameter( "imageTexture", texture, entity ) );
This is probably a bug; if someone knows why the QTextureImage cannot safely be parented to entity, please add a comment.

SDL_SetRenderTarget doesn't set the target

I am trying to write a C++ lambda that is registered with, and called from, Lua using the Sol2 binding. The callback below should create an SDL_Texture and clear it to a color. A Lua_Texture is just a wrapper for an SDL_Texture, and l_txt.texture is of type SDL_Texture*.
lua.set_function("init_texture",
[render](Lua_Texture &l_txt, int w, int h)
{
// free any previous texture
l_txt.deleteTexture();
l_txt.texture = SDL_CreateTexture(render, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, w, h);
SDL_SetRenderTarget(render, l_txt.texture);
SDL_Texture *target = SDL_GetRenderTarget(render);
assert(l_txt.texture == target);
assert(target == nullptr);
SDL_SetRenderDrawColor(render, 0xFF, 0x22, 0x22, 0xFF);
SDL_RenderClear(render);
});
My problem is that SDL_SetRenderTarget isn't functioning as I'd expect. I try to set the texture as the target so I can clear its color, but when I try to draw the texture to the screen it is still blank. The asserts in the above code both fail, showing that the current target texture is neither the texture I am trying to clear and later use, nor NULL (the expected value when there is no current target texture).
I have used this snippet of code before in plain C++ (not as a Lua callback) and it works as intended. Somehow, embedding it in Lua causes the behavior to change. Any help is very much appreciated, as I've been pulling my hair out over this for a while. Thanks!
I may have an answer for you, but you're not going to like it.
It looks like SDL_GetRenderTarget doesn't work as expected.
I got the exact same problem you have (that's how I found your question), and I could reproduce it reliably with this simple program:
int rendererIndex;
[snipped code : rendererIndex is set to the index of the DX11 renderer]
SDL_Renderer * renderer = SDL_CreateRenderer(pWindow->pWindow, rendererIndex, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_TARGETTEXTURE);
SDL_Texture* rtTexture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 200, 200);
SDL_SetRenderTarget(renderer, rtTexture);
if(SDL_GetRenderTarget(renderer) != rtTexture)
printf("ERROR.");
This always produces:
ERROR.
The workaround I used is to save the pointer to the render-target texture I set on the renderer myself, and not rely on SDL_GetRenderTarget at all.
EDIT:
I was curious why I didn't get the correct render target back, so I looked through SDL2's source code and found out why (code snipped for clarity):
int
SDL_SetRenderTarget(SDL_Renderer *renderer, SDL_Texture *texture)
{
    // CODE SNIPPED

    /* texture == NULL is valid and means reset the target to the window */
    if (texture) {
        CHECK_TEXTURE_MAGIC(texture, -1);
        if (renderer != texture->renderer) {
            return SDL_SetError("Texture was not created with this renderer");
        }
        if (texture->access != SDL_TEXTUREACCESS_TARGET) {
            return SDL_SetError("Texture not created with SDL_TEXTUREACCESS_TARGET");
        }

        // *** EMPHASIS MINE : This is the problem.
        if (texture->native) {
            /* Always render to the native texture */
            texture = texture->native;
        }
    }

    // CODE SNIPPED

    renderer->target = texture;

    // CODE SNIPPED
}

SDL_Texture *
SDL_GetRenderTarget(SDL_Renderer *renderer)
{
    return renderer->target;
}
In short, the renderer saves the current render target in renderer->target, but not before converting the texture you passed in to its native form. When you call SDL_GetRenderTarget, you get that native texture back, which may or may not be the same texture you set.
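In practice the workaround is just a little bookkeeping. A minimal sketch, with render, w and h as in the question and currentTarget as a hypothetical variable of your own:

// Keep track of the render target ourselves; don't trust SDL_GetRenderTarget,
// which may return SDL's internal "native" texture instead of ours.
SDL_Texture *currentTarget = nullptr;   // nullptr means "rendering to the window"

SDL_Texture *texture = SDL_CreateTexture(render, SDL_PIXELFORMAT_RGBA8888,
                                         SDL_TEXTUREACCESS_TARGET, w, h);

if (SDL_SetRenderTarget(render, texture) == 0)   // 0 == success
    currentTarget = texture;                     // our own bookkeeping

SDL_SetRenderDrawColor(render, 0xFF, 0x22, 0x22, 0xFF);
SDL_RenderClear(render);

// ... draw into the texture ...

SDL_SetRenderTarget(render, nullptr);            // back to the default target
currentTarget = nullptr;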

SDL2 - Why does SDL_CreateTextureFromSurface() need a renderer*?

This is the syntax of the SDL_CreateTextureFromSurface function:
SDL_Texture* SDL_CreateTextureFromSurface(SDL_Renderer* renderer, SDL_Surface* surface)
However, I'm confused about why we need to pass a renderer*. I thought we needed a renderer* only when drawing the texture?
You need SDL_Renderer to get information about the applicable constraints:
maximum supported size
pixel format
And probably something more.
In addition to the answer by plaes..
Under the hood, SDL_CreateTextureFromSurface calls SDL_CreateTexture, which itself also needs a Renderer, to create a new texture with the same size as the passed-in surface.
Then SDL_UpdateTexture is called on the newly created texture to load (copy) the pixel data from the surface you passed to SDL_CreateTextureFromSurface. If the passed-in surface's format differs from what the renderer supports, more logic happens to ensure correct behavior.
The Renderer itself is needed for SDL_CreateTexture because it's the GPU that handles and stores textures (most of the time), and the Renderer is meant to be an abstraction over the GPU.
A surface never needs a Renderer, since it's loaded in RAM and handled by the CPU.
You can find out more about how these calls work if you look at SDL_render.c from the SDL2 source code.
Here is some code inside SDL_CreateTextureFromSurface:
texture = SDL_CreateTexture(renderer, format, SDL_TEXTUREACCESS_STATIC,
                            surface->w, surface->h);
if (!texture) {
    return NULL;
}

if (format == surface->format->format) {
    if (SDL_MUSTLOCK(surface)) {
        SDL_LockSurface(surface);
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
        SDL_UnlockSurface(surface);
    } else {
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
    }
}
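For completeness, typical usage looks something like the following sketch: the surface lives in CPU memory, the texture lives with the renderer/GPU. An existing renderer is assumed, "image.bmp" is a placeholder path, and error handling is kept minimal.

// Load pixels on the CPU, hand them to the renderer/GPU as a texture,
// then free the CPU-side surface.
SDL_Surface *surface = SDL_LoadBMP("image.bmp");           // CPU memory, no renderer needed
if (!surface) {
    SDL_Log("SDL_LoadBMP failed: %s", SDL_GetError());
    return 1;
}

SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);  // renderer needed here
SDL_FreeSurface(surface);                                  // the texture owns its own copy now
if (!texture) {
    SDL_Log("SDL_CreateTextureFromSurface failed: %s", SDL_GetError());
    return 1;
}

SDL_RenderCopy(renderer, texture, NULL, NULL);             // and again when drawing
SDL_RenderPresent(renderer);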

QGLBuffer::map returns NULL?

I'm trying to use QGLBuffer to display an image.
The sequence is something like:
initializeGL() {
    glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);
    glbuffer.create();
    glbuffer.bind();
    glbuffer.allocate(image_width*image_height*4); // RGBA
    glbuffer.release();
}

// Attempting to write an image directly to graphics memory.
// map() should map the texture into the address space and give me an address
// to write directly to, but it always returns NULL
unsigned char* dest = glbuffer.map(QGLBuffer::WriteOnly); // FAILS
MyGetImageFunction( dest );
glbuffer.unmap();

paint() {
    glbuffer.bind();
    glBegin(GL_QUADS);
    glTexCoord2i(0,0); glVertex2i(0,height());
    glTexCoord2i(0,1); glVertex2i(0,0);
    glTexCoord2i(1,1); glVertex2i(width(),0);
    glTexCoord2i(1,0); glVertex2i(width(),height());
    glEnd();
    glbuffer.release();
}
There aren't any examples of using QGLBuffer in this way; it's pretty new.
Edit --- for anyone searching, here is the working solution -------
// Where glbuffer is defined as
glbuffer= QGLBuffer(QGLBuffer::PixelUnpackBuffer);
// sequence to get a pointer into a PBO, write data to it and copy it to a texture
glbuffer.bind(); // bind before doing anything
unsigned char *dest = (unsigned char*)glbuffer.map(QGLBuffer::WriteOnly);
MyGetImageFunction(dest);
glbuffer.unmap(); // need to unmap before the rest of OpenGL can access the PBO
glBindTexture(GL_TEXTURE_2D,texture);
// Note 'NULL' because memory is now onboard the card
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, image_width, image_height, glFormatExt, glType, NULL);
glbuffer.release(); // but don't release until finished the copy
// PaintGL function
glBindTexture(GL_TEXTURE_2D,textures);
glBegin(GL_QUADS);
glTexCoord2i(0,0); glVertex2i(0,height());
glTexCoord2i(0,1); glVertex2i(0,0);
glTexCoord2i(1,1); glVertex2i(width(),0);
glTexCoord2i(1,0); glVertex2i(width(),height());
glEnd();
You should bind the buffer before mapping it!
In the documentation for QGLBuffer::map:
It is assumed that create() has been called on this buffer and that it has been bound to the current context.
In addition to VJovic's comments, I think you are missing a few points about PBOs:
A pixel unpack buffer does not give you a pointer to the graphics texture. It is a separate piece of memory allocated on the graphics card to which you can write directly from the CPU.
The buffer is copied into a texture with a glTexSubImage2D(....., 0) call while the texture is also bound, which you do not do (the 0 is the offset into the pixel buffer). The copy is needed partly because textures have a different layout than linear pixel buffers.
See this page for a good explanation of PBO usages (I used it a few weeks ago to do async texture upload).
create will return false if the GL implementation does not support buffers, or there is no current QGLContext.
bind returns false if binding was not possible, usually because type() is not supported on this GL implementation.
You are not checking whether these two calls succeeded.
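A minimal sketch of that check, reusing the names from the question's code:

// Bail out early if the PBO could not be created or bound.
glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);
if (!glbuffer.create())
    qWarning("QGLBuffer::create() failed - no buffer support or no current QGLContext");
else if (!glbuffer.bind())
    qWarning("QGLBuffer::bind() failed - buffer type not supported by this GL implementation");
else
    glbuffer.allocate(image_width * image_height * 4); // RGBA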
I got the same thing: map returned NULL. When I used the following order, it was solved.
bool success = mPixelBuffer->create();
mPixelBuffer->setUsagePattern(QGLBuffer::DynamicDraw);
success = mPixelBuffer->bind();
mPixelBuffer->allocate(sizeof(imageData));
void* ptr = mPixelBuffer->map(QGLBuffer::ReadOnly);