Visual Studio Fallback error - C++

I have some code I am trying to run on my laptop, but it keeps giving a 'FALLBACK' error. I don't know what it means, but it is quite annoying. The program should just print 'Hello world!', but it prints it twice and changes the colours a little bit.
The same code is running perfectly on my PC.
I've searched for a long time to solve this problem, but couldn't find anything. I hope someone here can help me.
Here is my code:
// Template, major revision 3
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"
using namespace Tmpl8;
void Game::Init()
{
// put your initialization code here; will be executed once
}
void Game::Tick( float a_DT )
{
m_Screen->Clear( 0 );
m_Screen->Print( "hello world", 2, 2, 0xffffff );
m_Screen->Line( 2, 10, 66, 10, 0xffffff );
}
Thanks in advance! :-)
Edit:
It gives an error on this line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Maybe this could help?

Looking at this post from the OpenGL forums, and seeing that you're using OpenGL, I may have an idea.
You say that the code works fine on your PC but not on your notebook. So you have a possible hardware difference (different video cards) and a software difference (different OpenGL version/support).
What may be happening is that the feature you want to use from OpenGL is not supported on your notebook. Also, you are creating a texture without data (the NULL as the last parameter), which can also give you errors such as buffer overruns.
EDIT:
You may take a look at GLEW. It has a tool called "glewinfo" that checks all the features available on your hardware/driver; it generates a file of the same name in the same directory as the executable. For the power-of-two texture question, look for GL_ARB_texture_non_power_of_two.
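If GLEW is already set up in the project, the same check can be done at runtime as well. A minimal sketch, assuming a current GL context and a successful glewInit():
#include <GL/glew.h>
// Returns true when the driver exposes non-power-of-two texture support.
bool HasNpotTextures()
{
    return glewIsSupported("GL_ARB_texture_non_power_of_two") == GL_TRUE;
}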
EDIT 2:
As you said in the comments, without the GL_ARB_texture_non_power_of_two extension, and with the texture having a size of 640x480, glTexImage2D will give you an error, and all the code that depends on it will likely fail. To fix it, you have to stretch the dimensions of the image up to the next power of two; in this case, that would be 1024x512. Remember that the data you supply to glTexImage2D MUST have these dimensions.
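A small helper for that rounding step (just a sketch, not part of the template):
// Round v up to the next power of two, e.g. 640 -> 1024, 480 -> 512.
unsigned int NextPowerOfTwo(unsigned int v)
{
    unsigned int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}
The 640x480 pixel data itself then has to be copied (or stretched) into a 1024x512 buffer before being handed to glTexImage2D, with the texture coordinates adjusted to match.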

Seeing that the error comes from the line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Here are the reasons why that call can generate GL_INVALID_VALUE. Since I can't check it for sure, you'll have to go through this list and work out which one of them caused the issue.
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if internalFormat is not 1, 2, 3, 4, or one of the accepted resolution and format symbolic constants.
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if non-power-of-two textures are not supported and the width or height cannot be represented as 2^k + 2*(border) for some integer value of k.
GL_INVALID_VALUE is generated if border is not 0 or 1.
EDIT: I believe it could be the non-power-of-two texture size that's causing the problem. Rounding your texture size to the nearest power-of-two should probably fix the issue.
EDIT2: To test which of these is causing the issue, let's start with the most common one: trying to create a texture of non-power-of-two size. Create an image of size 256x256 and call this function with 256 for width and height. If the function still fails, I would try setting the level to 0 (keeping the power-of-two size in place).
But notice that you don't have any data for your image: it's set to NULL. You need to load the image data into memory and pass it to the function that creates the texture, and you aren't doing that. Read up on how to load images from a file, or how to render to a texture, whichever is relevant to you.
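To make that experiment concrete, here is a hedged sketch; the 256x256 size and the dummy white pixels are placeholders, and it assumes a texture object is already bound to GL_TEXTURE_2D (needs <vector> and <cstdio>):
std::vector<unsigned char> pixels(256 * 256 * 4, 255);   // dummy opaque white RGBA data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("glTexImage2D failed: 0x%x\n", err);          // GL_INVALID_VALUE is 0x0501
If this succeeds while the 640x480/NULL call fails, you know which of the conditions above applies.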

To give you a better answer, here it is as a fresh post. First you need this helper function to load a .bmp file into memory.
#include <cstdio>
#include <cstdlib>
#include <string>
#include <GL/glu.h> // for gluBuild2DMipmaps

unsigned int LoadTex(const std::string& Image)
{
    unsigned int Texture = 0;
    FILE* img = fopen(Image.c_str(), "rb");
    if (!img)
        return 0; // file missing or unreadable
    unsigned long bWidth = 0;
    unsigned long bHeight = 0;
    long size = 0;
    fseek(img, 18, SEEK_SET); // width and height live at offset 18 of the BMP header
    fread(&bWidth, 4, 1, img);
    fread(&bHeight, 4, 1, img);
    fseek(img, 0, SEEK_END);
    size = ftell(img) - 54; // pixel data size = file size minus the 54-byte header
    unsigned char *data = (unsigned char*)malloc(size);
    fseek(img, 54, SEEK_SET); // image data
    fread(data, size, 1, img);
    fclose(img);
    glGenTextures(1, &Texture);
    glBindTexture(GL_TEXTURE_2D, Texture);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bWidth, bHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    if (data)
        free(data);
    return Texture;
}
Courtesy: Post by b0x in Game Deception.
Then you call it in your code like this:
unsigned int texture = LoadTex("example_tex.bmp");
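Then, before drawing anything that should use it, bind the texture. A short sketch for the fixed-function pipeline, which is what the loader above assumes:
glEnable(GL_TEXTURE_2D);               // make sure texturing is enabled
glBindTexture(GL_TEXTURE_2D, texture); // select the texture returned by LoadTex
// ... draw your textured geometry here ...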

Related

Nv Path Rendering fonts optimal implementation

I'm using NV Path Rendering, having read Getting Started with NV Path Rendering by Mark Kilgard.
My implementation is based on the render_font example in the Tiger3DES project in NVidia Graphics Samples.
This implementation seems slower than a normal texture-based font solution, so I'm wondering whether it is flawed. NVidia state that NV Path Rendering is faster than the alternatives, but I'm hitting a performance limit far sooner than I expected.
I have a scene with 1000 'messages'. My FPS is incredibly poor on a Quadro K4200. If I combine the text into a single message there is no performance issue, but then formatting the messages separately is impossible. If I reduce the number of messages to 100 I get a decent framerate (200+ unlocked).
Are calls to stencil, coverstroke and coverfill expensive?
Here's a code snippet...
Init FontFace:
/* Create a range of path objects corresponding to Latin-1 character codes. */
m_glyphBase = glGenPathsNV(numChars);
glPathGlyphRangeNV(m_glyphBase,
target,
name.c_str(),
style,
0,
numChars,
GL_USE_MISSING_GLYPH_NV,
pathParamTemplate,
GLfloat(emScale)
);
/* Load base character set for unsupported glyphs. */
glPathGlyphRangeNV(m_glyphBase,
GL_STANDARD_FONT_NAME_NV,
"Sans",
style,
0,
numChars,
GL_USE_MISSING_GLYPH_NV,
pathParamTemplate,
GLfloat(emScale)
);
/* Query font and glyph metrics. */
GLfloat fontData[4];
glGetPathMetricRangeNV(GL_FONT_Y_MIN_BOUNDS_BIT_NV | GL_FONT_Y_MAX_BOUNDS_BIT_NV |
GL_FONT_UNDERLINE_POSITION_BIT_NV | GL_FONT_UNDERLINE_THICKNESS_BIT_NV,
m_glyphBase + ' ',
/*count*/1,
4 * sizeof(GLfloat),
fontData
);
m_yMin = fontData[0];
m_yMax = fontData[1];
m_underlinePosition = fontData[2];
m_underlineThickness = fontData[3];
glGetPathMetricRangeNV(GL_GLYPH_HORIZONTAL_BEARING_ADVANCE_BIT_NV,
m_glyphBase,
numChars,
0, /* stride of zero means sizeof(GLfloat) since 1 bit in mask */
&m_horizontalAdvance[0]
);
Init Message:
glGetPathSpacingNV(GL_ACCUM_ADJACENT_PAIRS_NV,
(GLsizei)message.size(),
GL_UNSIGNED_BYTE,
message.c_str(),
m_font->glyphBase(),
1.0, 1.0,
GL_TRANSLATE_X_NV,
&m_xtranslate[1]
);
/* Total advance is accumulated spacing plus horizontal advance of
the last glyph */
m_totalAdvance = m_xtranslate[m_messageLength - 1] +
m_font->horizontalAdvance(uint32(message[m_messageLength - 1]));
Draw Message:
glStencilStrokePathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
1, ~0U, /* Use all stencil bits */
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glColor3f(m_colour.r, m_colour.g, m_colour.b);
glCoverStrokePathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glStencilFillPathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_PATH_FILL_MODE_NV,
~0U, /* Use all stencil bits */
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
glCoverFillPathInstancedNV((GLsizei)m_messageLength,
GL_UNSIGNED_BYTE,
message().c_str(),
font()->glyphBase(),
GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
GL_TRANSLATE_X_NV,
&m_xtranslate[0]
);
I located the cause of the slowness, and it wasn't related to the functions referenced above; they perform very well once the offending code was removed. Full disclosure: I was using std::stack for the matrices used in the scene, and the calls to push and pop on that stack were expensive. So, in answer to the question: NVidia Path Rendering for text is blisteringly fast, and stencil, cover-stroke and cover-fill calls are inexpensive.
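For illustration only (this is not the exact code I ended up with), one way to avoid per-push allocations is a small preallocated stack; Mat4 below is a placeholder for whatever 4x4 matrix type the scene uses:
#include <cstddef>
#include <vector>
// Hypothetical fixed-capacity replacement for std::stack<Mat4>:
// one reserve up front, so push/pop never touch the allocator.
struct MatrixStack {
    std::vector<Mat4> frames;                                    // Mat4: assumed 4x4 float matrix type
    explicit MatrixStack(std::size_t cap = 64) { frames.reserve(cap); }
    void push(const Mat4& m) { frames.push_back(m); }
    void pop()               { frames.pop_back(); }
    const Mat4& top() const  { return frames.back(); }
};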

Bind CUDA output array/surface to GL texture in ManagedCUDA

I'm currently attempting to connect some form of output from a CUDA program to a GL_TEXTURE_2D for use in rendering. I'm not that worried about the output type from CUDA (whether it'd be an array or surface, I can adapt the program to that).
So the question is, how would I do that? (my current code copies the output array to system memory, and uploads it to the GPU again with GL.TexImage2D, which is obviously highly inefficient - when I disable those two pieces of code, it goes from approximately 300 kernel executions per second to a whopping 400)
I already have a little bit of test code, to at least bind a GL texture to CUDA, but I'm not even able to get the device pointer from it...
ctx = CudaContext.CreateOpenGLContext(CudaContext.GetMaxGflopsDeviceId(), CUCtxFlags.SchedAuto);
uint textureID = (uint)GL.GenTexture(); //create a texture in GL
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, width, height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Rgba, PixelType.UnsignedByte, null); //allocate memory for the texture in GL
CudaOpenGLImageInteropResource resultImage = new CudaOpenGLImageInteropResource(textureID, CUGraphicsRegisterFlags.WriteDiscard, CudaOpenGLImageInteropResource.OpenGLImageTarget.GL_TEXTURE_2D, CUGraphicsMapResourceFlags.WriteDiscard); //using writediscard because the CUDA kernel will only write to this texture
//then, as far as I understood the ManagedCuda example, I have to do the following when I call my kernel
//(done without a CudaGraphicsInteropResourceCollection because I only have one item)
resultImage.Map();
var ptr = resultImage.GetMappedPointer(); //this crashes
kernelSample.Run(ptr); //pass the pointer to the kernel so it knows where to write
resultImage.UnMap();
The following exception is thrown when attempting to get the pointer:
ErrorNotMappedAsPointer: This indicates that a mapped resource is not available for access as a pointer.
What do I need to do to fix this?
And even if this exception can be resolved, how would I solve the other part of my question; that is, how do I work with the acquired pointer in my kernel? Can I use a surface for that? Access it as an arbitrary array (pointer arithmetic)?
Edit:
Looking at this example, apparently I don't even need to map the resource every time I call the kernel and the render function. But how would this translate to ManagedCUDA?
Thanks to the example I found, I was able to translate that to ManagedCUDA (after browsing the source code and fiddling around), and I'm happy to announce that this does really improve my samples per second from about 300 to 400 :)
Apparently you need to use a 3D array (I haven't seen any overloads in ManagedCUDA that use 2D arrays), but that doesn't really matter - I just use a 3D array/texture that is exactly 1 deep.
id = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture3D, id);
GL.TexParameter(TextureTarget.Texture3D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture3D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
GL.TexImage3D(TextureTarget.Texture3D, 0, PixelInternalFormat.Rgba, width, height, 1, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, IntPtr.Zero); //allocate memory for the texture but do not upload anything
CudaOpenGLImageInteropResource resultImage = new CudaOpenGLImageInteropResource((uint)id, CUGraphicsRegisterFlags.SurfaceLDST, CudaOpenGLImageInteropResource.OpenGLImageTarget.GL_TEXTURE_3D, CUGraphicsMapResourceFlags.WriteDiscard);
resultImage.Map();
CudaArray3D mappedArray = resultImage.GetMappedArray3D(0, 0);
resultImage.UnMap();
CudaSurface surfaceResult = new CudaSurface(kernelSample, "outputSurface", CUSurfRefSetFlags.None, mappedArray); //nothing needs to be done anymore - this call connects the 3D array from the GL texture to a surface reference in the kernel
Kernel code:
surface<void, cudaSurfaceType3D> outputSurface; // surface reference, bound from the host to the mapped GL array
__global__ void Sample() {
...
// note: the x coordinate of surf3Dwrite is in bytes (e.g. pixelX * sizeof(uchar4) for an RGBA8 surface)
surf3Dwrite(output, outputSurface, pixelX, pixelY, 0);
}

Strange Texture Behavior C++ OpenGL

I'm currently taking a C++ Game Libraries class, and for this class it's been our quarter long project to build a renderer that supports a variety of things. For the current lab our instructor gave us a tutorial on loading in a bmp into OpenGL manually, and applying it to our geometries.
Tutorial: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/
After following this tutorial step by step my textures are having some interesting behaviors. I've gone to other classmates, upperclassmen, and multiple instructors. None of them have an idea of what is happening. Considering that practically every one's code is identical for this Lab, and I'm the only one having this problem, I can't help but be confused.
I'm using the following OBJ, and texture. I convert the OBJ into a binary file in an OBJ converter that I built myself. My renderer takes in this binary file and sends down the data to OpenGL vertex buffers.
OBJ and Texture: http://www.models-resource.com/playstation_2/ratchetclank/model/6662/
My friend and I have the same binary file structure, so I gave him a copy of my binary file to check if the UVs were correct. His renders a perfectly textured chicken, while mine renders a chicken that looks like the texture was squished horizontally to 1/16th the length of the model, then repeated a bunch of times. I would post images, but I'm new here and don't have enough reputation to do so. Over the weekend I'll do my best to increase my reputation, because I really think that it would help to visually see my problem.
I would post my source code, however this project is approaching about 16,000 lines of code, and I doubt anyone is willing to search through that to find a stranger's problem.
Any suggestions would be helpful, I'm primarily curious on common mistakes that can be made when working with OpenGL textures, or .bmps in general.
Thank Ya.
//-----Edit One-----//
My friend's result
My result
I'm afraid that I'm not allowed to use other libraries. I probably should have mentioned that in my initial post.
Here is the code where I am loading in the bmp. I heard from one of the upperclassmen at my school that I was ignoring something called bit depth. I know that the tutorial is pretty bad, and I'd rather learn to do it right than just barely scrape by. If anyone has a good resource on this subject, I would greatly appreciate being pointed in that direction.
unsigned char header[54];
unsigned int dataPos;
unsigned int width, height;
unsigned int imageSize;
unsigned char * data;
FILE * file = fopen(filePath, "rb");
if (!file)
{
printf("\nImage could not be opened");
exit(1);
}
if (fread(header, 1, 54, file) != 54){
printf("\nNot a correct BMP file");
exit(1);
}
if (header[0] != 'B' || header[1] != 'M'){
printf("\nNot a correct BMP file");
exit(1);
}
dataPos = *(int*)&(header[0x0A]);
imageSize = *(int*)&(header[0x22]);
width = *(int*)&(header[0x12]);
height = *(int*)&(header[0x16]);
if (imageSize == 0) imageSize = width * height * 3;
if (dataPos == 0) dataPos = 54;
data = new unsigned char[imageSize];
fread(data, 1, imageSize, file);
fclose(file);
glGenTextures(1, &m_textureID);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
I currently am using shaders: I have both a fragment and a vertex shader, identical to the shaders described in the tutorial, and I verify that both of them compile.
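For reference, this is roughly what checking the bit depth would look like; it isn't in my code yet, and it assumes the rows in 'data' are tightly packed (the bits-per-pixel value sits at offset 0x1C of the BMP header):
unsigned short bitsPerPixel = *(unsigned short*)&(header[0x1C]); // 24 = BGR, 32 = BGRA
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                           // rows in 'data' assumed tightly packed, no 4-byte padding
if (bitsPerPixel == 32)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
else
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);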
//-----Edit Two-----//
So I took durhass' suggestion, and set my color equal to a vec3(0.0, uv.x, uv.y) where uv is a vec2 that holds my texture coordinates, and this is what I get.
So I think I can see the root of the problem. I think that I am not storing my UVs correctly in my GL buffer. I don't think it's a problem with the binary file's UVs, considering that it works fine with my friend's engine. I'll look into this; thank you for the suggestion, this just might lead to a fix!
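For comparison, the way I understand the tutorial's UV buffer setup is roughly the sketch below; the attribute location 1 and the tightly packed vec2 layout are assumptions about my vertex format:
GLuint uvBuffer;
glGenBuffers(1, &uvBuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glBufferData(GL_ARRAY_BUFFER, uvs.size() * sizeof(float), uvs.data(), GL_STATIC_DRAW); // uvs: std::vector<float>, two floats per vertex
glEnableVertexAttribArray(1);                        // location 1 = texcoord input of the vertex shader (assumed)
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0 /* tightly packed */, (void*)0);
A wrong component count or stride here (for example 3 instead of 2, or a stride counted in floats rather than bytes) would sample the texture with squashed, repeating coordinates much like what I'm seeing.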

Error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'

I was just modifying the code after reinstalling Windows and VS 2012 Ultimate. The code (shown below) worked perfectly fine before, but when I try to build it now, it gives the following error:
Error 1 error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'
Code:
void CreateTexture(GLuint textureArray[], LPSTR strFileName, int textureID)
{
AUX_RGBImageRec *pBitmap = NULL;
if (!strFileName) // Return from the function if no file name was passed in
return;
pBitmap = auxDIBImageLoad(strFileName); //<-Error in this line // Load the bitmap and store the data
if (pBitmap == NULL) // If we can't load the file, quit!
exit(0);
// Generate a texture with the associative texture ID stored in the array
glGenTextures(1, &textureArray[textureID]);
// This sets the alignment requirements for the start of each pixel row in memory.
// glPixelStorei (GL_UNPACK_ALIGNMENT, 1);
// Bind the texture to the texture arrays index and init the texture
glBindTexture(GL_TEXTURE_2D, textureArray[textureID]);
// Build Mipmaps (builds different versions of the picture for distances - looks better)
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pBitmap->sizeX, pBitmap->sizeY, GL_RGB, GL_UNSIGNED_BYTE, pBitmap->data);
// Lastly, we need to tell OpenGL the quality of our texture map. GL_LINEAR is the smoothest.
// GL_NEAREST is faster than GL_LINEAR, but looks blocky and pixelated. Good for slower computers though.
// Read more about the MIN and MAG filters at the bottom of main.cpp
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
// glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
// Now we need to free the bitmap data that we loaded since openGL stored it as a texture
if (pBitmap) // If we loaded the bitmap
{
if (pBitmap->data) // If there is texture data
{
free(pBitmap->data); // Free the texture data, we don't need it anymore
}
free(pBitmap); // Free the bitmap structure
}
}
I tried this link, this one too, and also this one, but I'm still getting the error.
This function is used after initialization as:
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, "building1.bmp", 0);
CreateTexture(g_Texture, "clock.bmp", 0);
//list goes on
Can you help me out?
Change "LPSTR strFileName" to "LPCWSTR strFileName", "building1.bmp" to L"building1.bmp and "clock.bmp" to L"clock.bmp".
Always be careful here, because LPSTR is an ANSI (narrow) string type and LPCWSTR is a Unicode (wide) one. So if the function needs a Unicode string (like this: L"String here") you can't give it an ASCII string.
The solutions are either:
Change your function prototype to take wide strings:
void CreateTexture(GLuint textureArray[], LPCWSTR strFileName, int textureID)
//...
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, L"building1.bmp", 0);
CreateTexture(g_Texture, L"clock.bmp", 0);
or
Don't change your function prototype, but call the A version of the API function:
pBitmap = auxDIBImageLoadA(strFileName);
Recommended: Stick to wide strings and use the correct string types.
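If some filenames are only available as narrow strings at runtime, a small conversion helper is an option. This is only a sketch using the Win32 MultiByteToWideChar API:
#include <windows.h>
#include <string>
// Convert an ANSI (narrow) string to a wide string for the W-suffixed APIs.
std::wstring ToWide(const std::string& s)
{
    if (s.empty())
        return std::wstring();
    int len = MultiByteToWideChar(CP_ACP, 0, s.c_str(), (int)s.size(), NULL, 0); // required length
    std::wstring w(len, L'\0');
    MultiByteToWideChar(CP_ACP, 0, s.c_str(), (int)s.size(), &w[0], len);        // do the conversion
    return w;
}
Then something like auxDIBImageLoad(ToWide(narrowName).c_str()) works regardless of how the name was built.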

OpenSceneGraph float Image

Using C++ and OSG, I'm trying to upload a float texture to my shader, but somehow it does not seem to work. At the end I posted part of my code. The main question is how to create an osg::Image object from a float array. In OpenGL the desired code would be
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, width, height, 0,
GL_LUMINANCE, GL_FLOAT, data);
but in this case I have to use OSG.
The code runs fine when using
Image* image = osgDB::readImageFile("someImage.jpg");
instead of
image = new Image;
but I need to upload generated float data. It's also not possible to switch to unsigned char arrays as I need the GL_LUMINANCE32F_ARB data range in the shader code.
I hope someone can help me here, as Google couldn't help me with it (I googled for e.g. "osg float image"). So here's my code.
using namespace std;
using namespace osg;
//...
float* data = new float[width*height];
fill_n(data, size, 1.0); // << I actually do this for testing purposes
Texture2D* texture = new Texture2D;
Image* image = new Image;
osg::State* state = new osg::State;
Uniform* uniform = new Uniform(Uniform::SAMPLER_2D, "texUniform");
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setDataVariance(osg::Object::DYNAMIC);
texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
texture->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::CLAMP_TO_EDGE);
if (data == NULL)
cout << "texdata null" << endl; // << this is not printed
image->setImage(width, height, 1, GL_LUMINANCE32F_ARB,
GL_LUMINANCE, GL_FLOAT,
(unsigned char*)data, osg::Image::USE_NEW_DELETE);
if (image->getDataPointer() == NULL)
cout << "datapointernull" << endl; // << this is printed
if (!image->valid())
exit(1); // << here the code exits (hard exit just for testing purposes)
osgDB::writeImageFile(*image, "blah.png");
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setImage(image);
camera->getOrCreateStateSet()->setTextureAttributeAndModes(4, texture);
state->setActiveTextureUnit(4);
texture->apply(*state);
uniform->set(4);
addProgrammUniform(uniform);
I found another way on the web, letting osg::Image create the data and fill it afterwards. But somehow this also does not work. I inserted this just after the new XYZ; lines.
image->setInternalTextureFormat(GL_LUMINANCE32F_ARB);
image->allocateImage(width,height,1,GL_LUMINANCE,GL_FLOAT);
if (image->data() == NULL)
cout << "null here?!" << endl; // << this is printed.
I use the following (simplified) code to create and set a floating-point texture:
// Create texture and image
osg::Texture* texture = new osg::Texture2D;
osg::Image* image = new osg::Image();
image->allocateImage(size, size, 1, GL_LUMINANCE, GL_FLOAT);
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
texture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
texture->setImage(image);
// Set texture to node
osg::StateSet* stateSet = node->getOrCreateStateSet();
stateSet->setTextureAttributeAndModes(TEXTURE_UNIT_NUMBER, texture);
// Set data
float* data = reinterpret_cast<float*>(image->data());
/* ...data processing... */
image->dirty();
You may want to change some of the parameters, but this should give you a start. I believe that in your case TEXTURE_UNIT_NUMBER should be set to 4.
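For the "...data processing..." placeholder, a minimal example fill (a horizontal gradient; GL_LUMINANCE/GL_FLOAT means one float per texel) could be:
for (int y = 0; y < size; ++y)
    for (int x = 0; x < size; ++x)
        data[y * size + x] = float(x) / float(size);  // values in [0, 1)
using the data pointer and the image->dirty() call from the snippet above.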
but I need to upload generated float data. It's also not possible to switch to unsigned char arrays as I need the GL_LUMINANCE32F_ARB data range in the shader code.
osgDB::writeImageFile(*image, "blah.png");
PNG files don't support 32-bit-per-channel data, so you cannot write your texture to a file this way. See the libpng book:
PNG grayscale images support the widest range of pixel depths of any image type. Depths of 1, 2, 4, 8, and 16 bits are supported, covering everything from simple black-and-white scans to full-depth medical and raw astronomical images.[63]
[63] Calibrated astronomical image data is usually stored as 32-bit or 64-bit floating-point values, and some raw data is represented as 32-bit integers. Neither format is directly supported by PNG, although one could, in principle, design an ancillary chunk to hold the proper conversion information. Conversion of data with more than 16 bits of dynamic range would be a lossy transformation, however--at least, barring the abuse of PNG's alpha channel or RGB capabilities.
For 32 bits per channel, check out the OpenEXR format.
If, however, 16-bit floating point (i.e. half floats) suffices, then you can go about it like so:
osg::ref_ptr<osg::Image> heightImage = new osg::Image;
int pixelFormat = GL_LUMINANCE;
int type = GL_HALF_FLOAT;
heightImage->allocateImage(tex_width, tex_height, 1, pixelFormat, type);
Now to actually use and write half floats, you can use the GLM library. You get the half-float type by including <glm/detail/type_half.hpp>, where it is called hdata.
You now need to get the data pointer from your image and cast it to said format:
glm::detail::hdata *data = reinterpret_cast<glm::detail::hdata*>(heightImage->data());
This you can then access like you would a one dimensional array, so for example
data[currentRow*tex_width+ currentColumn] = glm::detail::toFloat16(3.1415f);
Note that if you write this same data to a bmp or tif file (using the osg plugins), the result will be incorrect. In my case I just got the left half of the intended image stretched onto the full width, and not in grayscale but in some strange color encoding.