OpenGL: wglGetProcAddress always returns NULL - c++

I am trying to create an OpenGL program which uses shaders. I tried using the following code to load the shader functions, but wglGetProcAddress always returns 0 no matter what I do.
The rest of the program works as normal when not using the shader functions.
HDC g_hdc;
HGLRC g_hrc;
PFNGLATTACHSHADERPROC glpf_attachshader;
PFNGLCOMPILESHADERPROC glpf_compileshader;
PFNGLCREATEPROGRAMPROC glpf_createprogram;
PFNGLCREATESHADERPROC glpf_createshader;
PFNGLDELETEPROGRAMPROC glpf_deleteprogram;
PFNGLDELETESHADERPROC glpf_deleteshader;
PFNGLDETACHSHADERPROC glpf_detachshader;
PFNGLLINKPROGRAMPROC glpf_linkprogram;
PFNGLSHADERSOURCEPROC glpf_shadersource;
PFNGLUSEPROGRAMPROC glpf_useprogram;
void GL_Init(HDC dc)
{
    //create pixel format
    PIXELFORMATDESCRIPTOR pfd =
    {
        sizeof(PIXELFORMATDESCRIPTOR),
        1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA,
        32,
        0, 0, 0, 0, 0, 0,
        0, 0, 0,
        0, 0, 0, 0,
        32, 0, 0,
        PFD_MAIN_PLANE,
        0, 0, 0, 0
    };
    //choose + set pixel format
    int pixfmt = ChoosePixelFormat(dc, &pfd);
    if (pixfmt && SetPixelFormat(dc, pixfmt, &pfd))
    {
        //create GL render context
        if (g_hrc = wglCreateContext(dc))
        {
            g_hdc = dc;
            //select GL render context
            wglMakeCurrent(dc, g_hrc);
            //get function pointers
            glpf_attachshader = (PFNGLATTACHSHADERPROC) wglGetProcAddress("glAttachShader");
            glpf_compileshader = (PFNGLCOMPILESHADERPROC) wglGetProcAddress("glCompileShader");
            glpf_createprogram = (PFNGLCREATEPROGRAMPROC) wglGetProcAddress("glCreateProgram");
            glpf_createshader = (PFNGLCREATESHADERPROC) wglGetProcAddress("glCreateShader");
            glpf_deleteprogram = (PFNGLDELETEPROGRAMPROC) wglGetProcAddress("glDeleteProgram");
            glpf_deleteshader = (PFNGLDELETESHADERPROC) wglGetProcAddress("glDeleteShader");
            glpf_detachshader = (PFNGLDETACHSHADERPROC) wglGetProcAddress("glDetachShader");
            glpf_linkprogram = (PFNGLLINKPROGRAMPROC) wglGetProcAddress("glLinkProgram");
            glpf_shadersource = (PFNGLSHADERSOURCEPROC) wglGetProcAddress("glShaderSource");
            glpf_useprogram = (PFNGLUSEPROGRAMPROC) wglGetProcAddress("glUseProgram");
        }
    }
}
I know this may look like a duplicate, but in most of the other posts the error was caused by simple mistakes (like calling wglGetProcAddress before wglMakeCurrent). I'm in a bit of a unique situation - any help would be appreciated.

You are requesting a 32-bit Z-buffer in this code. That will probably throw you onto the GDI software rasterizer (which implements exactly 0 extensions). You can use 32-bit depth buffers on modern hardware, but most of the time you cannot do it with the default framebuffer; you have to use FBOs.
I have seen some drivers accept 32-bit depth only to fall back to the closest matching 24-bit pixel format, while others simply refuse to give a hardware pixel format at all (this is probably your case). If this is in fact your problem, a quick inspection of your GL strings (GL_RENDERER, GL_VERSION, GL_VENDOR) should make it obvious.
24-bit depth + 8-bit stencil is pretty much universally supported; that is the first depth/stencil size you should try.
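If that is indeed what is happening, a minimal sketch of the fix in your GL_Init is to request a 24-bit depth buffer with an 8-bit stencil instead of 32/0, and then dump the GL strings to confirm you actually got a hardware format (field-by-field initialization is used here only for readability):
PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize        = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion     = 1;
pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType   = PFD_TYPE_RGBA;
pfd.cColorBits   = 32;
pfd.cDepthBits   = 24;   // was 32 - the likely cause of the software fallback
pfd.cStencilBits = 8;    // was 0
pfd.iLayerType   = PFD_MAIN_PLANE;
// ... ChoosePixelFormat / SetPixelFormat / wglCreateContext / wglMakeCurrent as before ...
printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER)); // "GDI Generic" means software rasterizer
printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));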

You could use GLEW to do that kind of work for you.
Or maybe you don't want to?
EDIT:
You could maybe use glext.h then. Or at least look in it and copy-paste whatever interests you.
I'm not an expert; I just remember having had that kind of problem and am trying to recall what was connected to it.
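If you do go the GLEW route, the setup is roughly this (a sketch; it assumes GLEW is linked in and, like wglGetProcAddress, it must be called after wglMakeCurrent):
#include <GL/glew.h>

bool LoadGLFunctions()
{
    // Must be called with a current GL context on this thread.
    if (glewInit() != GLEW_OK)
        return false;
    // After this, glCreateShader, glCompileShader, glAttachShader, etc.
    // can be called directly - no manual wglGetProcAddress needed.
    return GLEW_VERSION_2_0 != 0; // shader objects require GL 2.0+
}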

Related

How to use glImportMemoryWin32HandleEXT to share an ID3D11Texture2D KeyedMutex Shared handle with OpenGL?

I am investigating how to do cross-process interop with OpenGL and Direct3D 11 using the EXT_external_objects, EXT_external_objects_win32 and EXT_win32_keyed_mutex OpenGL extensions. My goal is to share a B8G8R8A8_UNORM texture (an external library expects BGRA and I cannot change it; what's relevant here is the byte depth of 4) with 1 mip level, allocated and written to offscreen with D3D11 by one application, and render it with OpenGL in another. Because the texture is drawn by another process, I cannot use WGL_NV_DX_interop2.
My actual code can be seen here and is written in C# with Silk.NET. For illustration purposes, though, I will describe my problem in pseudo-C(++).
First I create my texture in Process A with D3D11, and obtain a shared handle to it, and send it over to process B.
#define WIDTH 100
#define HEIGHT 100
#define BPP 4 // BGRA8 is 4 bytes per pixel
ID3D11Texture2D *texture;
D3D11_TEXTURE2D_DESC texDesc = {
    .Width = WIDTH,
    .Height = HEIGHT,
    .MipLevels = 1,
    .ArraySize = 1,
    .Format = DXGI_FORMAT_B8G8R8A8_UNORM,
    .SampleDesc = { .Count = 1, .Quality = 0 },
    .Usage = USAGE_DEFAULT,
    .BindFlags = BIND_SHADER_RESOURCE,
    .CPUAccessFlags = 0,
    .MiscFlags = D3D11_RESOURCE_MISC_SHARED_NTHANDLE | D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX
};
device->CreateTexture2D(&texDesc, NULL, &texture);
HANDLE sharedHandle;
texture->CreateSharedHandle(NULL, DXGI_SHARED_RESOURCE_READ, NULL, &sharedHandle);
SendToProcessB(sharedHandle, pid);
In Process B, I first duplicate the handle to get one that's process-local.
HANDLE localSharedHandle;
HANDLE hProcA = OpenProcess(PROCESS_DUP_HANDLE, false, processAPID);
DuplicateHandle(hProcA, sharedHandle, GetCurrentProcess(), &localSharedHandle, 0, false, DUPLICATE_SAME_ACCESS);
CloseHandle(hProcA);
At this point, I have a valid shared handle to the DXGI resource in localSharedHandle. I have a D3D11 implementation in Process B that is able to successfully render the shared texture after opening it with OpenSharedResource1. My issue is with OpenGL, however.
This is what I am currently doing for OpenGL:
GLuint sharedTexture, memObj;
glCreateTextures(GL_TEXTURE_2D, 1, &sharedTexture);
glTextureParameteri(sharedTexture, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT); // D3D11 side is D3D11_TEXTURE_LAYOUT_UNDEFINED
// Create the memory object handle
glCreateMemoryObjectsEXT(1, &memObj);
// I am not actually sure what the size parameter here is referring to.
// Since the source texture is DX11, there's no way to get the allocation size,
// I make a guess of W * H * BPP
// According to docs for VkExternalMemoryHandleTypeFlagBitsNV, NtHandle Shared Resources use HANDLE_TYPE_D3D11_IMAGE_EXT
glImportMemoryWin32HandleEXT(memObj, WIDTH * HEIGHT * BPP, GL_HANDLE_TYPE_D3D11_IMAGE_EXT, (void*)localSharedHandle);
DBG_GL_CHECK_ERROR(); // GL_NO_ERROR
Checking for errors along the way seems to indicate the import was successful. However, I am not able to bind the texture.
if (glAcquireKeyedMutexWin32EXT(memObj, 0, (UINT)-1)) {
    DBG_GL_CHECK_ERROR(); // GL_NO_ERROR
    glTextureStorageMem2D(sharedTexture, 1, GL_RGBA8, WIDTH, HEIGHT, memObj, 0);
    DBG_GL_CHECK_ERROR(); // GL_INVALID_VALUE
    glReleaseKeyedMutexWin32EXT(memObj, 0);
}
What goes wrong is the call to glTextureStorageMem2D. The shared KeyedMutex is being properly acquired and released. The extension documentation is unclear as to how I'm supposed to properly bind this texture and draw it.
After some more debugging, I managed to get "[DebugSeverityHigh] DebugSourceApi: DebugTypeError, id: 1281: GL_INVALID_VALUE error generated. Memory object too small" from the debug context. By dividing my width in half I was able to get some garbled output on the screen.
It turns out the size needed to import the texture was not WIDTH * HEIGHT * BPP (where BPP = 4 for BGRA in this case), but WIDTH * HEIGHT * BPP * 2. Importing the handle with that doubled size allows the texture to bind properly and render correctly.
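For reference, a sketch of the import sequence that ends up working, with the same WIDTH/HEIGHT/BPP values as above and the doubled size:
// Twice the naive W * H * BPP guess is what the driver accepted for this BGRA8 texture.
const GLuint64 importSize = (GLuint64)WIDTH * HEIGHT * BPP * 2;
glCreateMemoryObjectsEXT(1, &memObj);
glImportMemoryWin32HandleEXT(memObj, importSize, GL_HANDLE_TYPE_D3D11_IMAGE_EXT, (void*)localSharedHandle);
glTextureStorageMem2D(sharedTexture, 1, GL_RGBA8, WIDTH, HEIGHT, memObj, 0); // now succeeds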

Nv Path Rendering fonts optimal implementation

I'm using NV Path Rendering, having read Getting Started with NV Path Rendering by Mark Kilgard.
My implementation is based on the render_font example in the Tiger3DES project in NVidia Graphics Samples.
This implementation seems slower than a normal texture-based font solution, so I'm wondering: is it flawed? NVidia states that NV Path Rendering is faster than the alternatives, but I'm hitting a performance limit far sooner than I expected.
I have a scene with 1000 'messages'. My FPS is incredibly poor on a Quadro K4200. If I combine the text into a single message there is no performance issue but formatting the messages separately is then impossible. If I reduce the number of messages to 100 I get a decent framerate (200+ unlocked).
Are calls to stencil, coverstroke and coverfill expensive?
Here's a code snippet...
Init FontFace:
/* Create a range of path objects corresponding to Latin-1 character codes. */
m_glyphBase = glGenPathsNV(numChars);
glPathGlyphRangeNV(m_glyphBase,
target,
name.c_str(),
style,
0,
numChars,
GL_USE_MISSING_GLYPH_NV,
pathParamTemplate,
GLfloat(emScale)
);
/* Load base character set for unsupported glyphs. */
glPathGlyphRangeNV(m_glyphBase,
GL_STANDARD_FONT_NAME_NV,
"Sans",
style,
0,
numChars,
GL_USE_MISSING_GLYPH_NV,
pathParamTemplate,
GLfloat(emScale)
);
/* Query font and glyph metrics. */
GLfloat fontData[4];
glGetPathMetricRangeNV(GL_FONT_Y_MIN_BOUNDS_BIT_NV | GL_FONT_Y_MAX_BOUNDS_BIT_NV |
GL_FONT_UNDERLINE_POSITION_BIT_NV | GL_FONT_UNDERLINE_THICKNESS_BIT_NV,
m_glyphBase + ' ',
/*count*/1,
4 * sizeof(GLfloat),
fontData
);
m_yMin = fontData[0];
m_yMax = fontData[1];
m_underlinePosition = fontData[2];
m_underlineThickness = fontData[3];
glGetPathMetricRangeNV(GL_GLYPH_HORIZONTAL_BEARING_ADVANCE_BIT_NV,
m_glyphBase,
numChars,
0, /* stride of zero means sizeof(GLfloat) since 1 bit in mask */
&m_horizontalAdvance[0]
);
Init Message:
glGetPathSpacingNV(GL_ACCUM_ADJACENT_PAIRS_NV,
(GLsizei)message.size(),
GL_UNSIGNED_BYTE,
message.c_str(),
m_font->glyphBase(),
1.0, 1.0,
GL_TRANSLATE_X_NV,
&m_xtranslate[1]
);
/* Total advance is accumulated spacing plus horizontal advance of
the last glyph */
m_totalAdvance = m_xtranslate[m_messageLength - 1] +
m_font->horizontalAdvance(uint32(message[m_messageLength - 1]));
Draw Message:
glStencilStrokePathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
1, ~0U, /* Use all stencil bits */
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glColor3f(m_colour.r, m_colour.g, m_colour.b);
glCoverStrokePathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glStencilFillPathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_PATH_FILL_MODE_NV,
~0U, /* Use all stencil bits */
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glCoverFillPathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
I located the cause of the slowness, and it wasn't related to the functions referenced above; they perform very well once the offending code was removed. Full disclosure: I was using std::stack for the matrices used in the scene, and the calls to push and pop on that stack were expensive. So, in answer to the question, NVidia Path Rendering for text is blisteringly fast, and stencil, cover-stroke and cover-fill are inexpensive.
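For anyone hitting the same wall, a minimal sketch of one way to avoid that overhead is a matrix stack backed by a preallocated std::vector, so push/pop never allocate; Matrix4 here is just a stand-in for whatever 4x4 matrix class the scene uses:
#include <vector>

struct Matrix4 { float m[16]; };

class MatrixStack
{
public:
    explicit MatrixStack(std::size_t capacity = 64) { m_stack.reserve(capacity); }
    void Push(const Matrix4& mat) { m_stack.push_back(mat); } // no allocation once reserved
    void Pop()                    { m_stack.pop_back(); }
    const Matrix4& Top() const    { return m_stack.back(); }
private:
    std::vector<Matrix4> m_stack; // contiguous storage, reserved up front
};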

I need some clarification with the concept of depth/stencil buffers in direct3D 11 (c++)

I am following tutorials online to help me create my first game, and so far I am understanding every concept that Direct3D 11 has thrown at me.
But there's a certain concept that I can't seem to completely grasp yet: the depth/stencil buffer.
I understand that a depth/stencil buffer is used to "compare" the depths of pixels from different objects in a game. If two objects overlap each other, the object whose pixels have smaller depth values shows up closer to the camera. And you define a depth/stencil buffer by filling out the D3D11_TEXTURE2D_DESC.
But my question is: if I fill out the D3D11_TEXTURE2D_DESC structure, am I telling DirectX HOW to compare the pixels of different objects in a game?
If you don't understand my question, please just try to explain the concept of depth/stencil buffers as simply as you can. Also, please try to explain what exactly I am defining by filling out the D3D11_TEXTURE2D_DESC structure.
Thank you.
When you fill out the D3D11_TEXTURE2D_DESC, you are describing the depth/stencil buffer itself: how large it is, what format it uses, and how you want to bind it to the pipeline.
The 'boiler-plate' construction for this is as follows (taken from the Direct3D Win32 Game Visual Studio template, using the C++ helper CD3D11_TEXTURE2D_DESC):
CD3D11_TEXTURE2D_DESC depthStencilDesc(depthBufferFormat,
backBufferWidth, backBufferHeight, 1, 1,
D3D11_BIND_DEPTH_STENCIL);
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(&depthStencilDesc, nullptr,
depthStencil.GetAddressOf()));
The depthBufferFormat is determined by what level of precision you want, whether or not you are using a stencil buffer, and your Direct3D feature level. The template uses DXGI_FORMAT_D24_UNORM_S8_UINT by default, which works on all feature levels and provides reasonable precision for depth plus an 8-bit stencil. The size must exactly match your color back-buffer.
You bind the depth-stencil buffer to the render pipeline by creating a 'view' for the buffer.
CD3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed(
m_d3dDevice->CreateDepthStencilView(depthStencil.Get(),
&depthStencilViewDesc, m_depthStencilView.ReleaseAndGetAddressOf()));
You then 'clear' the view each frame and bind it for rendering:
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(),
D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
m_d3dContext->OMSetRenderTargets(1, m_renderTargetView.GetAddressOf(),
m_depthStencilView.Get());
You tell Direct3D how to do the comparison with D3D11_DEPTH_STENCIL_DESC (or the C++ helper CD3D11_DEPTH_STENCIL_DESC).
The 'default' depth/stencil state is:
DepthEnable = TRUE;
DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
DepthFunc = D3D11_COMPARISON_LESS;
StencilEnable = FALSE;
StencilReadMask = D3D11_DEFAULT_STENCIL_READ_MASK;
StencilWriteMask = D3D11_DEFAULT_STENCIL_WRITE_MASK;
const D3D11_DEPTH_STENCILOP_DESC defaultStencilOp =
{ D3D11_STENCIL_OP_KEEP,
D3D11_STENCIL_OP_KEEP,
D3D11_STENCIL_OP_KEEP,
D3D11_COMPARISON_ALWAYS };
FrontFace = defaultStencilOp;
BackFace = defaultStencilOp;
In the DirectX Tool Kit, we provide three common depth states:
// DepthNone
CD3D11_DEPTH_STENCIL_DESC desc(D3D11_DEFAULT);
desc.DepthEnable = FALSE;
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
// DepthDefault
CD3D11_DEPTH_STENCIL_DESC desc(D3D11_DEFAULT);
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
// DepthRead
CD3D11_DEPTH_STENCIL_DESC desc(D3D11_DEFAULT);
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
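To actually apply one of these descriptions you create a state object from it and bind it on the output-merger stage before drawing; here is a minimal sketch using the same helpers as the snippets above:
// Build a depth/stencil state from the description and bind it.
CD3D11_DEPTH_STENCIL_DESC desc(D3D11_DEFAULT);   // starts from the defaults listed above
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;

ComPtr<ID3D11DepthStencilState> depthState;
DX::ThrowIfFailed(
    m_d3dDevice->CreateDepthStencilState(&desc, depthState.GetAddressOf()));

// The second argument is the stencil reference value.
m_d3dContext->OMSetDepthStencilState(depthState.Get(), 0);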

Removal of OpenGL rubber banding artefacts

I'm working with some OpenGL code for scientific visualization and I'm having issues getting its rubber banding working on newer hardware. The code is drawing a "Zoom Window" over an existing scene with one corner of the "Zoom Window" at the stored left-click location, and the other under the mouse as it is moved. On the second left-click the scene zooms into the selected window.
The symptoms I am seeing as the mouse is moved across the scene are:
Rubber banding artefacts appearing, i.e. the lines used to create the "Zoom Window" are not removed from the buffer by the second "RenderLogic" pass (see code below)
I can clearly see the contents of the previous buffer flashing up and disappearing as the buffers are swapped
The above problem doesn't happen on low-end hardware such as the integrated graphics on a netbook I have. Also, I can't recall this problem occurring ~5 years ago when this code was written.
Here are the relevant code sections, trimmed down to focus on the relevant OpenGL:
// Called by every mouse move event
// Makes use of current device context (m_hDC) and rendering context (m_hRC)
void MyViewClass::DrawLogic()
{
BOOL bSwapRv = FALSE;
// Make the rendering context current
if (!wglMakeCurrent(m_hDC, m_hRC))
// ... error handling
// Perform the logic rendering
glLogicOp(GL_XOR);
glEnable(GL_COLOR_LOGIC_OP);
// Draws the rectangle on the buffer using XOR op
RenderLogic();
bSwapRv = ::SwapBuffers(m_hDC);
// Removes the rectangle from the buffer via a second pass
RenderLogic();
glDisable(GL_COLOR_LOGIC_OP);
// Release the rendering context
if (!wglMakeCurrent(NULL, NULL))
// ... error handling
}
void MyViewClass::RenderLogic(void)
{
glLineWidth(1.0f);
glColor3f(0.6f,0.6f,0.6f);
glEnable(GL_LINE_STIPPLE);
glLineStipple(1, 0x0F0F);
glBegin(GL_LINE_LOOP);
// Uses custom "Point" class with Coords() method returning double*
// Draw rectangle with corners at clicked location and current location
glVertex2dv(m_pntClickLoc.Coords());
glVertex2d(m_pntCurrLoc.X(), m_pntClickLoc.Y());
glVertex2dv(m_pntCurrLoc.Coords());
glVertex2d(m_pntClickLoc.X(), m_pntCurrLoc.Y());
glEnd();
glDisable(GL_LINE_STIPPLE);
}
// Setup code that might be relevant to the buffer configuration
bool MyViewClass::SetupPixelFormat()
{
PIXELFORMATDESCRIPTOR pfd = {
sizeof(PIXELFORMATDESCRIPTOR),
1, // Version number (?)
PFD_DRAW_TO_WINDOW // Format must support window
| PFD_SUPPORT_OPENGL // Format must support OpenGL
| PFD_DOUBLEBUFFER, // Must support double buffering
PFD_TYPE_RGBA, // Request an RGBA format
32, // Select a 32 bit colour depth
0, 0, 0, 0, 0, 0, // Colour bits ignored (?)
8, // Alpha buffer bits
0, // Shift bit ignored (?)
0, // No accumulation buffer
0, 0, 0, 0, // Accumulation bits ignored
16, // 16 bit Z-buffer
0, // No stencil buffer
0, // No accumulation buffer (?)
PFD_MAIN_PLANE, // Main drawing layer
0, // Number of overlay and underlay planes
0, 0, 0 // Layer masks ignored (?)
};
PIXELFORMATDESCRIPTOR chosen_pfd;
memset(&chosen_pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
chosen_pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
// Find the closest match to the pixel format
m_uPixelFormat = ::ChoosePixelFormat(m_hDC, &pfd);
// Make sure a pixel format could be found
if (!m_uPixelFormat)
return false;
::DescribePixelFormat(m_hDC, m_uPixelFormat, sizeof(PIXELFORMATDESCRIPTOR), &chosen_pfd);
// Set the pixel format for the view
::SetPixelFormat(m_hDC, m_uPixelFormat, &chosen_pfd);
return true;
}
Any pointers on how to remove the artefacts will be much appreciated.
With OpenGL it's canonical to redraw the whole viewport if anything at all changes. Consider this: modern systems animate complex scenes at well over 30 FPS.
But I understand that you may want to avoid this. So the usual approach is to first copy the front buffer into a texture before drawing the first rubber band, and then for each rubber-band redraw "reset" the image by drawing a viewport-filling quad with that texture.
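A minimal sketch of that approach with the fixed-function pipeline (it assumes a texture m_backdropTex created once at window size, and an identity modelview/projection so a -1..1 quad fills the viewport; the names are illustrative):
// Once, right before the first rubber-band draw: snapshot the current front buffer.
glBindTexture(GL_TEXTURE_2D, m_backdropTex);
glReadBuffer(GL_FRONT);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, width, height, 0);

// On every mouse move: restore the snapshot, then draw the rubber band on top.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, m_backdropTex);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);
RenderLogic();          // draw the rectangle normally - no XOR trickery needed
::SwapBuffers(m_hDC);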
I know I'm posting to a year-and-a-half-old question, but in case anyone else comes across this:
I've had this happen to me myself, and the reason is that you are trying to remove the lines from the wrong buffer. For example, you draw your rectangle on buffer A, call SwapBuffers, and then try to remove the rectangle from buffer B. What you want to do is keep track of two "zoom window" rectangles while you're doing the drawing, one for buffer A and one for buffer B, and keep track of which one is the most recent (see the sketch below).
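Something along these lines, as a sketch of the bookkeeping (hypothetical member names, and a RenderLogic overload that takes the two corners):
// One "previously drawn rectangle" per buffer of the double-buffered context.
struct ZoomRect { Point a, b; bool valid; };
ZoomRect m_prevRect[2] = {};
int      m_curBuffer = 0;          // which of the two buffers we are about to draw into

void MyViewClass::DrawLogic()
{
    glLogicOp(GL_XOR);
    glEnable(GL_COLOR_LOGIC_OP);
    // Erase the rectangle previously drawn into *this* buffer (two swaps ago), if any.
    if (m_prevRect[m_curBuffer].valid)
        RenderLogic(m_prevRect[m_curBuffer].a, m_prevRect[m_curBuffer].b);
    // Draw the current rectangle and remember it for this buffer.
    RenderLogic(m_pntClickLoc, m_pntCurrLoc);
    m_prevRect[m_curBuffer] = { m_pntClickLoc, m_pntCurrLoc, true };
    glDisable(GL_COLOR_LOGIC_OP);
    ::SwapBuffers(m_hDC);
    m_curBuffer ^= 1;              // the other buffer becomes the draw target
}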
If you're using Vista/7 and Aero, try switching to the Classic theme.

Visual Studio Fallback error - Programming C++

I have some code I am trying to run on my laptop, but it keeps giving a 'FALLBACK' error. I don't know what it is, but it is quite annoying. It should just print 'Hello world!', but it prints it twice and changes the colours a little bit.
The same code is running perfectly on my PC.
I've searched a long time to solve this problem, but couldn't find anything. I hope someone here can help me.
Here is my code:
// Template, major revision 3
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"
using namespace Tmpl8;
void Game::Init()
{
// put your initialization code here; will be executed once
}
void Game::Tick( float a_DT )
{
m_Screen->Clear( 0 );
m_Screen->Print( "hello world", 2, 2, 0xffffff );
m_Screen->Line( 2, 10, 66, 10, 0xffffff );
}
Thanks in advance! :-)
Edit:
It gives an error on this line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Maybe this could help?
Looking at this post from the OpenGL Forums and seeing that you're using OpenGL, I may have an idea.
You say that the code works fine on your PC but not on your notebook. So you have a possible hardware difference (different video cards) and a software difference (different OpenGL version/support).
What may be happening is that the feature you want to use from OpenGL is not supported on your notebook. Also, you are creating a texture without data (the NULL in the last parameter), which will probably cause problems later when the rest of the code expects the texture to have contents.
EDIT:
You may take a look at GLEW. It has a tool called "glewinfo" that looks for all features available on your hardware/driver. It generates a file of the same name in the same path as the executable. For non-power-of-two texture support, look for GL_ARB_texture_non_power_of_two.
EDIT 2:
As you said in the comments, without the GL_ARB_texture_non_power_of_two extension, and with the texture having a size of 640x480, glTexImage2D will give you an error, and all the code that depends on it will likely fail. To fix it, you have to stretch the dimensions of the image to the next power of two; in this case, it would become 1024x512. Remember that the data you supply to glTexImage2D MUST have these dimensions. A small helper for that rounding step is sketched below.
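This is just a sketch; with 640x480 it produces the 1024x512 mentioned above:
// Round a dimension up to the next power of two (640 -> 1024, 480 -> 512).
unsigned int NextPowerOfTwo(unsigned int v)
{
    unsigned int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

// Allocate the texture with padded dimensions, then upload the 640x480 pixels
// into a sub-rectangle with glTexSubImage2D and adjust the texture coordinates.
unsigned int texW = NextPowerOfTwo(SCRWIDTH);   // 1024
unsigned int texH = NextPowerOfTwo(SCRHEIGHT);  // 512
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);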
Seeing that the error comes from the line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Here are the reasons why that function could generate GL_INVALID_VALUE. Since I can't check it for sure, you'll have to go through this list and work out which one of them is causing the issue.
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if internalFormat is not 1, 2, 3, 4, or one of the accepted resolution and format symbolic constants.
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if non-power-of-two textures are not supported and the width or height cannot be represented as 2^k + 2*border for some integer value of k.
GL_INVALID_VALUE is generated if border is not 0 or 1.
EDIT: I believe it could be the non-power-of-two texture size that's causing the problem. Rounding your texture size to the nearest power-of-two should probably fix the issue.
EDIT2: To test which of these is causing the issue, let's start with the most common one: trying to create a texture with a non-power-of-two size. Create an image of size 256x256 and call this function with 256 for width and height. If the function still fails, I would try setting level to 0 (keeping the power-of-two size in place).
BUT DANG, you don't have data for your image? It's set as NULL. You need to load the image data into memory and pass it to the function to create the texture, and you aren't doing that. Read up on how to load images from a file or how to render to a texture, whichever is relevant to you.
This is to give you a better answer as a fresh post. First you need this helper function to load a BMP file into memory.
unsigned int LoadTex(string Image)
{
    unsigned int Texture;
    FILE* img = NULL;
    img = fopen(Image.c_str(), "rb");
    unsigned long bWidth = 0;
    unsigned long bHeight = 0;
    DWORD size = 0;
    fseek(img, 18, SEEK_SET);       // width and height live at offset 18 in the BMP header
    fread(&bWidth, 4, 1, img);
    fread(&bHeight, 4, 1, img);
    fseek(img, 0, SEEK_END);
    size = ftell(img) - 54;         // pixel data size = file size minus the 54-byte header
    unsigned char *data = (unsigned char*)malloc(size);
    fseek(img, 54, SEEK_SET);       // image data
    fread(data, size, 1, img);
    fclose(img);
    glGenTextures(1, &Texture);
    glBindTexture(GL_TEXTURE_2D, Texture);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bWidth, bHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    if (data)
        free(data);
    return Texture;
}
Courtesy: Post by b0x in Game Deception.
Then you need to call it in your code like so:
unsigned int texture = LoadTex("example_tex.bmp");
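At draw time, bind the returned handle before emitting textured geometry, for example (fixed-function style, matching the snippet above):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);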