NV Path Rendering fonts optimal implementation - OpenGL

I'm using NV Path Rendering, having read "Getting Started with NV Path Rendering" by Mark Kilgard.
My implementation is based on the render_font example in the Tiger3DES project in the NVidia Graphics Samples.
This implementation seems slower than a normal texture-based font solution, so I'm wondering whether it is flawed. NVidia states that NV Path Rendering is faster than the alternatives, but I'm hitting a performance limit far sooner than I expected.
I have a scene with 1000 'messages'. My FPS is incredibly poor on a Quadro K4200. If I combine the text into a single message there is no performance issue, but then formatting the messages separately is impossible. If I reduce the number of messages to 100 I get a decent frame rate (200+ unlocked).
Are the calls to stencil, cover-stroke and cover-fill expensive?
Here's a code snippet...
Init FontFace:
/* Create a range of path objects corresponding to Latin-1 character codes. */
m_glyphBase = glGenPathsNV(numChars);
glPathGlyphRangeNV(m_glyphBase,
target,
name.c_str(),
style,
0,
numChars,
GL_USE_MISSING_GLYPH_NV,
pathParamTemplate,
GLfloat(emScale)
);
/* Load base character set for unsupported glyphs. */
glPathGlyphRangeNV(m_glyphBase,
GL_STANDARD_FONT_NAME_NV,
"Sans",
style,
0,
numChars,
GL_USE_MISSING_GLYPH_NV,
pathParamTemplate,
GLfloat(emScale)
);
/* Query font and glyph metrics. */
GLfloat fontData[4];
glGetPathMetricRangeNV(GL_FONT_Y_MIN_BOUNDS_BIT_NV | GL_FONT_Y_MAX_BOUNDS_BIT_NV |
GL_FONT_UNDERLINE_POSITION_BIT_NV | GL_FONT_UNDERLINE_THICKNESS_BIT_NV,
m_glyphBase + ' ',
/*count*/1,
4 * sizeof(GLfloat),
fontData
);
m_yMin = fontData[0];
m_yMax = fontData[1];
m_underlinePosition = fontData[2];
m_underlineThickness = fontData[3];
glGetPathMetricRangeNV(GL_GLYPH_HORIZONTAL_BEARING_ADVANCE_BIT_NV,
m_glyphBase,
numChars,
0, /* stride of zero means sizeof(GLfloat) since 1 bit in mask */
&m_horizontalAdvance[0]
);
Init Message:
glGetPathSpacingNV(GL_ACCUM_ADJACENT_PAIRS_NV,
(GLsizei)message.size(),
GL_UNSIGNED_BYTE,
message.c_str(),
m_font->glyphBase(),
1.0, 1.0,
GL_TRANSLATE_X_NV,
&m_xtranslate[1]
);
/* Total advance is accumulated spacing plus horizontal advance of
the last glyph */
m_totalAdvance = m_xtranslate[m_messageLength - 1] +
m_font->horizontalAdvance(uint32(message[m_messageLength - 1]));
Draw Message:
glStencilStrokePathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
1, ~0U, /* Use all stencil bits */
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glColor3f(m_colour.r, m_colour.g, m_colour.b);
glCoverStrokePathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glStencilFillPathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_PATH_FILL_MODE_NV,
~0U, /* Use all stencil bits */
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glCoverFillPathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
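For context, the stencil/cover calls above rely on the stencil test being configured beforehand; a minimal sketch of that setup, mirroring the state used in the NV Path example further down this page:
/* Cover only where the stencil pass marked samples, then reset the stencil to zero. */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NOTEQUAL, 0, ~0U);
glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);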

I located the cause of the slowness and it wasn't related to the functions referenced above; they perform very well once the offending code was removed. Full disclosure: I was using std::stack for the matrices in the scene, and the push and pop calls on that stack were expensive. So, in answer to the question, NVidia path rendering for text is blisteringly fast, and the stencil, cover-stroke and cover-fill calls are inexpensive.
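For anyone hitting the same bottleneck, here is a minimal sketch of the kind of fixed-capacity matrix stack that avoids the per-push heap traffic of std::stack. The Matrix4 type and the capacity of 64 are placeholders, not part of the original code:
#include <array>
#include <cassert>
#include <cstddef>

struct Matrix4 { float m[16]; };           // placeholder 4x4 matrix type

class MatrixStack {
public:
    void push(const Matrix4& mat) {        // no heap allocation, just an index bump
        assert(m_top < m_data.size());
        m_data[m_top++] = mat;
    }
    Matrix4 pop() {
        assert(m_top > 0);
        return m_data[--m_top];
    }
private:
    std::array<Matrix4, 64> m_data{};      // capacity fixed up front
    std::size_t m_top = 0;
};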

Related

How to use glImportMemoryWin32HandleEXT to share an ID3D11Texture2D KeyedMutex Shared handle with OpenGL?

I am investigating how to do cross-process interop between OpenGL and Direct3D 11 using the EXT_external_objects, EXT_external_objects_win32 and EXT_win32_keyed_mutex OpenGL extensions. My goal is to share a B8G8R8A8_UNORM texture (an external library expects BGRA and I cannot change that; what's relevant here is the byte depth of 4), with one mip level, allocated and written offscreen with D3D11 by one application, and render it with OpenGL in another. Because the texture is drawn to by another process, I cannot use WGL_NV_DX_interop2.
My actual code can be seen here and is written in C# with Silk.NET. For illustration's purposes, though, I will describe my problem in pseudo-C(++).
First I create my texture in Process A with D3D11, and obtain a shared handle to it, and send it over to process B.
#define WIDTH 100
#define HEIGHT 100
#define BPP 4 // BGRA8 is 4 bytes per pixel
ID3D11Texture2D *texture;
D3D11_TEXTURE2D_DESC texDesc = {
    .Width = WIDTH,
    .Height = HEIGHT,
    .MipLevels = 1,
    .ArraySize = 1,
    .Format = DXGI_FORMAT_B8G8R8A8_UNORM,
    .SampleDesc = { .Count = 1, .Quality = 0 },
    .Usage = D3D11_USAGE_DEFAULT,
    .BindFlags = D3D11_BIND_SHADER_RESOURCE,
    .CPUAccessFlags = 0,
    .MiscFlags = D3D11_RESOURCE_MISC_SHARED_NTHANDLE | D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX
};
device->CreateTexture2D(&texDesc, NULL, &texture);
HANDLE sharedHandle;
texture->CreateSharedHandle(NULL, DXGI_SHARED_RESOURCE_READ, NULL, &sharedHandle);
SendToProcessB(sharedHandle, pid);
In Process B, I first duplicate the handle to get one that's process-local.
HANDLE localSharedHandle;
HANDLE hProcA = OpenProcess(PROCESS_DUP_HANDLE, false, processAPID);
DuplicateHandle(hProcA, sharedHandle, GetCurrentProcess(), &localSharedHandle, 0, false, DUPLICATE_SAME_ACCESS);
CloseHandle(hProcA);
At this point, I have a valid shared handle to the DXGI resource in localSharedHandle. I have a D3D11 implementation in Process B that successfully renders the shared texture after opening it with OpenSharedResource1. My issue, however, is with OpenGL.
This is what I am currently doing for OpenGL:
GLuint sharedTexture, memObj;
glCreateTextures(GL_TEXTURE_2D, 1, &sharedTexture);
glTextureParameteri(sharedTexture, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT); // D3D11 side is D3D11_TEXTURE_LAYOUT_UNDEFINED
// Create the memory object handle
glCreateMemoryObjectsEXT(1, &memObj);
// I am not actually sure what the size parameter here is referring to.
// Since the source texture is DX11, there's no way to get the allocation size,
// I make a guess of W * H * BPP
// According to docs for VkExternalMemoryHandleTypeFlagBitsNV, NtHandle Shared Resources use HANDLE_TYPE_D3D11_IMAGE_EXT
glImportMemoryWin32HandleEXT(memObj, WIDTH * HEIGHT * BPP, GL_HANDLE_TYPE_D3D11_IMAGE_EXT, (void*)localSharedHandle);
DBG_GL_CHECK_ERROR(); // GL_NO_ERROR
Checking for errors along the way seems to indicate the import was successful. However I am not able to bind the texture.
if (glAcquireKeyedMutexWin32EXT(memObj, 0, (UINT)-1)) {
DBG_GL_CHECK_ERROR(); // GL_NO_ERROR
glTextureStorageMem2D(sharedTexture, 1, GL_RGBA8, WIDTH, HEIGHT, memObj, 0);
DBG_GL_CHECK_ERROR(); // GL_INVALID_VALUE
glReleaseKeyedMutexWin32EXT(memObj, 0);
}
What goes wrong is the call to glTextureStorageMem2D. The shared KeyedMutex is being properly acquired and released. The extension documentation is unclear as to how I'm supposed to properly bind this texture and draw it.
After some more debugging, I managed to get [DebugSeverityHigh] DebugSourceApi: DebugTypeError, id: 1281: "GL_INVALID_VALUE error generated. Memory object too small" from the debug context. By halving my width I was able to get some garbled output on the screen.
It turns out the size needed to import the texture was not WIDTH * HEIGHT * BPP (where BPP = 4 for BGRA in this case), but WIDTH * HEIGHT * BPP * 2. Importing the handle with size WIDTH * HEIGHT * BPP * 2 allows the texture to bind and render correctly.
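For reference, the working import reduces to the snippet below; it is identical to the code above apart from the doubled size (all names come from the earlier snippets):
/* The driver's actual allocation for the shared BGRA8 texture turned out to be
   larger than the naive WIDTH * HEIGHT * BPP guess, so import with twice that. */
glImportMemoryWin32HandleEXT(memObj, WIDTH * HEIGHT * BPP * 2,
                             GL_HANDLE_TYPE_D3D11_IMAGE_EXT, (void*)localSharedHandle);
glTextureStorageMem2D(sharedTexture, 1, GL_RGBA8, WIDTH, HEIGHT, memObj, 0);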

Using OpenGL to assign color to pixels

I have an NxM array. The data type is double, and the values can range from 0.000001 to 1.0. I want to display them using OpenGL as NxM colored pixels, e.g. 0.0001 ~ 0.0005 will be red, 0.0005 ~ 0.001 will be light red, like a picture with a legend for the different ranges.
I thought I should use a texture for efficiency, but I do not quite understand how to match the values in the array to the texture. Do I first need to define a texture that acts like a legend? How would the values in the array pick up the colors in the texture?
Or should I first create a color lookup table and use glDrawPixels? How do I define the color table in that case?
Following the approach posted by @Josef Rissling, I defined a legend, and each pixel gets an index into the legend position. I currently use glDrawPixels(). I assume each legend position contains an R, G, B value. How should I set glPixelTransfer and glPixelMap()? The code I pasted below gives me just a black screen.
GLuint legend_image[1024][3]; // it contains { {0,0,255}, {0,0,254}, ...}
// GL initialization;
glutInit(&c, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowSize(width_, height_);
glutCreateWindow("GPU render");
// allocate buffer handle
glGenBuffers(1, &buffer_obj_);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, buffer_obj_);
// allocate GPU memory
glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, width_ * height_, NULL, GL_DYNAMIC_DRAW_ARB);
// request a CUDA C name for this buffer
CUDA_CALL(cudaGraphicsGLRegisterBuffer(&res_, buffer_obj_, cudaGraphicsMapFlagsNone));
glPixelTransferi(GL_MAP_COLOR, true);
glPixelMapuiv(GL_PIXEL_MAP_I_TO_I, 1024, legend_image[0]);
glutDisplayFunc(draw_func);
glutIdleFunc(idle_func);
glutMainLoop();
void idle_func()
{
// cuda kernel to do calculation, and then convert to pixel legend position which is pointed by dev_ptr.
cudaGraphicsMapResources(1, &res_, 0);
unsigned int* dev_ptr;
size_t size;
cudaGraphicsResourceGetMappedPointer((void**)&dev_ptr, &size, res_);
cuda_kernel(dev_ptr);
cudaGraphicsUnmapResources(1, &res_, 0);
glutPostRedisplay();
}
void draw_func()
{
glDrawPixels(width_, height_, GL_COLOR_INDEX, GL_UNSIGNED_INT, 0);
glutSwapBuffers();
}
// some cleanup code...
You should mention which language and which OpenGL version you are using...
The efficiency depends on what kind of function you use for the mapping; texture lookups are not cheap, especially if you are not already storing your data in a texture (in that case you have to copy the data first).
But for your mapping example:
You can create a legend texture (into which you bake your non-linear color space) that lets you map from your value range to a color via a pixel offset (the position where the mapped color value lies). The general case, written as a pseudo-shader, is then:
map(value)
{
pixelStartPosition, pixelEndPosition;
pixelRange = pixelEndPosition - pixelStartPosition;
valueNormalizer = 1.0 / (valueMaximum - valueMinimum);
pixelLegendPosition = pixelStartPosition + pixelRange * ( (value-valueMinimum) * valueNormalizer);
return pixelLegendPosition;
}
Say you have a legend texture with 2000 pixels covering positions 0 to 1999 and a value range from 0 to 1:
pixelStartPosition=0
pixelEndPosition=1999
pixelRange = pixelEndPosition - pixelStartPosition // 1999
valueNormalizer = 1.0 / (valueMaximum - valueMinimum) // 1.0
pixelLegendPosition = pixelStartPosition + pixelRange * ( (value-valueMinimum) * valueNormalizer)
// 0 + 1999 * ( (value-0) * 1 ) ===> 1999 * value
If you need to transfer the array data into a texture, there are several ways to do so, but it depends mainly on your version/language; glTexImage2D is a good direction.
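If it helps, here is a minimal sketch of uploading such a legend with glTexImage2D, assuming a 2000-entry RGB byte array like the example above (the array name and sizes are illustrative only):
/* 'legend' is assumed to hold 2000 RGB byte triples built from your color ranges. */
GLubyte legend[2000 * 3];
GLuint legendTex;
glGenTextures(1, &legendTex);
glBindTexture(GL_TEXTURE_2D, legendTex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);              /* rows are not 4-byte aligned */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2000, 1, 0,
             GL_RGB, GL_UNSIGNED_BYTE, legend);     /* legend as a 2000x1 texture */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);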

Converting TrueType font default units to pixel size in NV Path

I am writing a text module for an OpenGL engine using the NVidia Path extension (NV Path). The extension allows loading system and external font files using TrueType metrics. Now, I need to be able to set a standard font size (in pixels) for the glyphs when rendering the text. By default the loaded glyph has EMscale = 2048. Searching for a glyph-metrics-to-pixels conversion I found this:
Converting FUnits to pixels
Values in the em square are converted to values in the pixel
coordinate system by multiplying them by a scale. This scale is:
pointSize * resolution / ( 72 points per inch * units_per_em )
So units_per_em equals 2048; pointSize and resolution are the unknowns I can't resolve. How do I get the resolution value for the viewport width and height to put into this equation? Also, what should the point size be if my input is the pixel size for the font?
I tried to solve this equation with different kinds of input, but my rendered text always comes out smaller (or bigger) than the reference text (After Effects).
NV_Path docs refer to FreeType2 metrics. The reference says:
The metrics found in face->glyph->metrics are normally expressed in
26.6 pixel format (i.e., 1/64th of pixels), unless you use the FT_LOAD_NO_SCALE flag when calling FT_Load_Glyph or FT_Load_Char. In
this case, the metrics will be expressed in original font units.
I tried scaling the text model matrix down by 1/64. It approximates the correct size but is still not exact.
Here is how I currently setup the text rendering in the code:
emScale=2048;
glyphBase = glGenPathsNV(1+numChars);
pathTemplate= ~0;
glPathGlyphRangeNV(glyphBase,GL_SYSTEM_FONT_NAME_NV,
"Verdana",GL_BOLD_BIT_NV,0,numChars,GL_SKIP_MISSING_GLYPH_NV,pathTemplate,emScale);
/* Query font and glyph metrics. */
glGetPathMetricRangeNV(
GL_FONT_Y_MIN_BOUNDS_BIT_NV|
GL_FONT_Y_MAX_BOUNDS_BIT_NV|
GL_FONT_X_MIN_BOUNDS_BIT_NV|
GL_FONT_X_MAX_BOUNDS_BIT_NV|
GL_FONT_UNDERLINE_POSITION_BIT_NV|
GL_FONT_UNDERLINE_THICKNESS_BIT_NV,glyphBase+' ' ,1 ,6*sizeof(GLfloat),font_data);
glGetPathMetricRangeNV(GL_GLYPH_HORIZONTAL_BEARING_ADVANCE_BIT_NV,
glyphBase,numChars,0,horizontalAdvance);
/* Query spacing information for example's message. */
messageLen = strlen(message);
xtranslate =(GLfloat*)malloc(sizeof(GLfloat) *messageLen);
if(!xtranslate){
fprintf(stderr, "%s: malloc of xtranslate failed\n", "Text3D error");
exit(1);
}
xtranslate[0] = 0.0f; /* Initial xtranslate is zero. */
/* Use 100% spacing; use 0.9 for both for 90% spacing. */
GLfloat advanceScale = 1.0f,
kerningScale = 1.0f;
glGetPathSpacingNV(GL_ACCUM_ADJACENT_PAIRS_NV,
(GLsizei)messageLen,GL_UNSIGNED_BYTE,message,glyphBase,
advanceScale,kerningScale,GL_TRANSLATE_X_NV,&xtranslate[1]); /* messageLen-1 accumulated translates are written here. */
const unsigned char *message_ub = (const unsigned char*)message;
totalAdvance = xtranslate[messageLen-1] +
horizontalAdvance[message_ub[messageLen-1]];
xBorder = totalAdvance / messageLen;
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NOTEQUAL ,0 ,~0);
glStencilOp(GL_KEEP,GL_KEEP,GL_ZERO);
////////// init matrices /////////////
translate(model ,vec3(0));
translate(view ,vec3(0));
float nearF=1 ,farF = 1200;
glViewport(0,0,_viewPortW,_viewPortH);
glMatrixLoadIdentityEXT(GL_PROJECTION);
float aspect_ratio =(float) _viewPortW /(float) _viewPortH;
glMatrixFrustumEXT(GL_PROJECTION ,-aspect_ratio,aspect_ratio,-1 ,1 ,nearF,farF);
model=translate(model,vec3(0.0f,384.0,0.0f));//move up
//scale by 26.6 also doesn't work:
model=scale(model,vec3((1.0f/26.6f),1.0f/26.6f,1.0f/26.6f));
view=lookAt(vec3(0,0,0),vec3(0,0,-1),vec3(0,1,0));
}
The resolution is device dependent and, in your equation, is given as DPI (dots per inch). pointSize is the size of the font chosen by the user, in points. A point is 1/72 inch (strictly speaking this is the PostScript point; the traditional typesetter's point is slightly different).
The device resolution can be queried using OS-dependent methods; search for "Display DPI $NAMEOFOPERATINGSYSTEM". Using a size in points gives you constant physical font sizes independent of the display device used.
Note that when rendering with OpenGL you still go through the transformation pipeline, which must be accounted for.
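As a concrete sketch of the quoted formula in code, with a 12 pt size, 96 DPI and the 2048 units_per_em from the question used purely as example inputs (query the real DPI from the OS as described above):
/* FUnits -> pixels scale, straight from the quoted formula. */
float fontScale(float pointSize, float dpi, float unitsPerEm)
{
    return pointSize * dpi / (72.0f * unitsPerEm);
}

/* e.g. a 12 pt font at 96 DPI with emScale = 2048 */
float s = fontScale(12.0f, 96.0f, 2048.0f);   /* = 0.0078125 pixels per font unit */
model = scale(model, vec3(s, s, s));          /* instead of the 1/26.6 guess */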

Removal of OpenGL rubber banding artefacts

I'm working with some OpenGL code for scientific visualization and I'm having issues getting its rubber banding working on newer hardware. The code is drawing a "Zoom Window" over an existing scene with one corner of the "Zoom Window" at the stored left-click location, and the other under the mouse as it is moved. On the second left-click the scene zooms into the selected window.
The symptoms I am seeing as the mouse is moved across the scene are:
- Rubber-banding artefacts: the lines used to create the "Zoom Window" are not removed from the buffer by the second "RenderLogic" pass (see code below)
- I can clearly see the contents of the previous buffer flashing up and disappearing as the buffers are swapped
The above problem doesn't happen on low-end hardware such as the integrated graphics of a netbook I have. Also, I can't recall this problem ~5 years ago when this code was written.
Here are the relevant code sections, trimmed down to focus on the relevant OpenGL:
// Called by every mouse move event
// Makes use of current device context (m_hDC) and rendering context (m_hRC)
void MyViewClass::DrawLogic()
{
BOOL bSwapRv = FALSE;
// Make the rendering context current
if (!wglMakeCurrent(m_hDC, m_hRC))
// ... error handling
// Perform the logic rendering
glLogicOp(GL_XOR);
glEnable(GL_COLOR_LOGIC_OP);
// Draws the rectangle on the buffer using XOR op
RenderLogic();
bSwapRv = ::SwapBuffers(m_hDC);
// Removes the rectangle from the buffer via a second pass
RenderLogic();
glDisable(GL_COLOR_LOGIC_OP);
// Release the rendering context
if (!wglMakeCurrent(NULL, NULL))
// ... error handling
}
void MyViewClass::RenderLogic(void)
{
glLineWidth(1.0f);
glColor3f(0.6f,0.6f,0.6f);
glEnable(GL_LINE_STIPPLE);
glLineStipple(1, 0x0F0F);
glBegin(GL_LINE_LOOP);
// Uses custom "Point" class with Coords() method returning double*
// Draw rectangle with corners at clicked location and current location
glVertex2dv(m_pntClickLoc.Coords());
glVertex2d(m_pntCurrLoc.X(), m_pntClickLoc.Y());
glVertex2dv(m_pntCurrLoc.Coords());
glVertex2d(m_pntClickLoc.X(), m_pntCurrLoc.Y());
glEnd();
glDisable(GL_LINE_STIPPLE);
}
// Setup code that might be relevant to the buffer configuration
bool MyViewClass::SetupPixelFormat()
{
PIXELFORMATDESCRIPTOR pfd = {
sizeof(PIXELFORMATDESCRIPTOR),
1, // Version number (?)
PFD_DRAW_TO_WINDOW // Format must support window
| PFD_SUPPORT_OPENGL // Format must support OpenGL
| PFD_DOUBLEBUFFER, // Must support double buffering
PFD_TYPE_RGBA, // Request an RGBA format
32, // Select a 32 bit colour depth
0, 0, 0, 0, 0, 0, // Colour bits ignored (?)
8, // Alpha buffer bits
0, // Shift bit ignored (?)
0, // No accumulation buffer
0, 0, 0, 0, // Accumulation bits ignored
16, // 16 bit Z-buffer
0, // No stencil buffer
0, // No auxiliary buffers
PFD_MAIN_PLANE, // Main drawing layer
0, // Number of overlay and underlay planes
0, 0, 0 // Layer masks ignored (?)
};
PIXELFORMATDESCRIPTOR chosen_pfd;
memset(&chosen_pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
chosen_pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
// Find the closest match to the pixel format
m_uPixelFormat = ::ChoosePixelFormat(m_hDC, &pfd);
// Make sure a pixel format could be found
if (!m_uPixelFormat)
return false;
::DescribePixelFormat(m_hDC, m_uPixelFormat, sizeof(PIXELFORMATDESCRIPTOR), &chosen_pfd);
// Set the pixel format for the view
::SetPixelFormat(m_hDC, m_uPixelFormat, &chosen_pfd);
return true;
}
Any pointers on how to remove the artefacts will be much appreciated.
With OpenGL it's canonical to redraw the whole viewport if anything changes at all. Consider this: modern systems animate complex scenes at well over 30 FPS.
But I understand that you may want to avoid this. The usual approach is to first copy the front buffer into a texture before drawing the first rubber band, and then, for each rubber-band redraw, "reset" the image by drawing a viewport-filling quad with that texture.
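A rough sketch of that approach, sticking with the legacy fixed-function style of the code in the question (w and h stand for the viewport size; the full-screen quad assumes identity modelview and projection matrices, and copying a non-power-of-two region assumes NPOT texture support):
/* Once, on the first left-click: snapshot the current front buffer. */
GLuint backdrop;
glGenTextures(1, &backdrop);
glBindTexture(GL_TEXTURE_2D, backdrop);
glReadBuffer(GL_FRONT);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, w, h, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Every mouse move: restore the snapshot, then draw the rubber band on top. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, backdrop);
glBegin(GL_QUADS);                       /* viewport-filling quad */
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);
RenderLogic();                           /* draw the current rectangle */
::SwapBuffers(m_hDC);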
I know I'm posting to a year-and-a-half-old question, but in case anyone else comes across this:
I've had this happen to me myself. It's because you are trying to remove the lines from the wrong buffer. For example, you draw your rectangle on buffer A, call SwapBuffers, and then try to remove the rectangle from buffer B. What you want to do is keep track of two "zoom window" rectangles while you're drawing, one for buffer A and one for buffer B, and keep track of which one is the most recent.
If you're using Vista/7 and Aero, try switching to the Classic theme.

Visual Studio Fallback error - Programming C++

I have some code I am trying to run on my laptop, but it keeps giving a 'FALLBACK' error. I don't know what it is, but it is quite annoying. It should just print 'Hello world!', but it prints it twice and changes the colours a little bit.
The same code runs perfectly on my PC.
I've spent a long time searching for a solution to this problem but couldn't find anything. I hope someone here can help me.
Here is my code:
// Template, major revision 3
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"
using namespace Tmpl8;
void Game::Init()
{
// put your initialization code here; will be executed once
}
void Game::Tick( float a_DT )
{
m_Screen->Clear( 0 );
m_Screen->Print( "hello world", 2, 2, 0xffffff );
m_Screen->Line( 2, 10, 66, 10, 0xffffff );
}
Thanks in advance! :-)
Edit:
It gives an error on this line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Maybe this could help?
Looking at this post from the OpenGL Forums and seeing that you're using OpenGL, I may have an idea.
You say that the code works fine on your computer but not on your notebook. So you have possible hardware differences (different video cards) and software differences (different OpenGL version/support).
What may be happening is that the feature you want to use from OpenGL is not supported on your notebook. Also, you are creating a texture without any data (the NULL in the last parameter), which can also cause problems.
EDIT:
You may take a look at GLEW. It has a tool called "glewinfo" that looks for all features available on your hardware/driver. It generates a file of the same name in the same path as the executable. For the power-of-two textures, look for GL_ARB_texture_non_power_of_two.
EDIT 2:
As you said in the comments, without the GL_ARB_texture_non_power_of_two extension, and with a texture size of 640x480, glTexImage2D will give you an error, and all the code that depends on it will likely fail. To fix it, you have to stretch the dimensions of the image to the next power of two; in this case it becomes 1024x512 (see the sketch below for the rounding). Remember that the data you supply to glTexImage2D MUST have these dimensions.
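A small helper for that rounding step, as a sketch (assumes 32-bit unsigned sizes):
/* Round v up to the next power of two, e.g. 640 -> 1024, 480 -> 512. */
unsigned int nextPowerOfTwo(unsigned int v)
{
    v--;
    v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
    v |= v >> 8;  v |= v >> 16;
    return v + 1;
}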
Seeing that the error comes from the line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Here are the reasons why that function could generate GL_INVALID_VALUE. Since I can't check it for sure, you'll have to go through this list and work out which one of them caused the issue.
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if internalFormat is not 1, 2, 3, 4, or one of the accepted resolution and format symbolic constants.
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if non-power-of-two textures are not supported and the width or height cannot be represented as 2^k + 2*border for some integer value of k.
GL_INVALID_VALUE is generated if border is not 0 or 1.
EDIT: I believe it could be the non-power-of-two texture size that's causing the problem. Rounding your texture size up to the nearest power of two should probably fix the issue.
EDIT2: To test which of these is causing the issue, let's start with the most common one: trying to create a texture with a non-power-of-two size. Create an image of size 256x256 and call this function with 256 for width and height. If the function still fails, I would try setting level to 0 (keeping the power-of-two size in place).
BUT DANG, you don't have data for your image? It's set to NULL. You need to load the image data into memory and pass it to the function to create the texture, and you aren't doing that. Read up on how to load images from a file or how to render to a texture, whichever is relevant to you.
This is to give you a better answer as a fresh post. First you need this helper function to load a bmp file into memory.
unsigned int LoadTex(string Image)
{
unsigned int Texture;
FILE* img = NULL;
img = fopen(Image.c_str(),"rb");
unsigned long bWidth = 0;
unsigned long bHeight = 0;
DWORD size = 0;
fseek(img,18,SEEK_SET);
fread(&bWidth,4,1,img);
fread(&bHeight,4,1,img);
fseek(img,0,SEEK_END);
size = ftell(img) - 54;
unsigned char *data = (unsigned char*)malloc(size);
fseek(img,54,SEEK_SET); // image data
fread(data,size,1,img);
fclose(img);
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bWidth, bHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
if (data)
free(data);
return Texture;
}
Courtesy: Post by b0x in Game Deception.
Then you call it in your code like this:
unsigned int texture = LoadTex("example_tex.bmp");
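To actually see it on screen you still have to bind the texture while drawing something; a minimal fixed-function sketch (the quad coordinates are placeholders):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);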