I'm currently taking a C++ Game Libraries class, and our quarter-long project has been to build a renderer that supports a variety of features. For the current lab our instructor gave us a tutorial on manually loading a BMP into OpenGL and applying it to our geometry.
Tutorial: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/
After following this tutorial step by step, my textures exhibit some interesting behavior. I've gone to other classmates, upperclassmen, and multiple instructors, and none of them have an idea of what is happening. Considering that practically everyone's code is identical for this lab, and I'm the only one having this problem, I can't help but be confused.
I'm using the following OBJ and texture. I convert the OBJ into a binary file with an OBJ converter that I built myself. My renderer takes in this binary file and sends the data down to OpenGL vertex buffers.
OBJ and Texture: http://www.models-resource.com/playstation_2/ratchetclank/model/6662/
My friend and I have the same binary file structure, so I gave him a copy of my binary file to check whether the UVs were correct. His renders a perfectly textured chicken, while mine renders a chicken that looks like the texture was squished horizontally to 1/16th the length of the model, then repeated a bunch of times. I would post images, but I'm new here and don't have enough reputation to do so. Over the weekend I'll do my best to increase my reputation, because I really think it would help to see my problem visually.
I would post my source code, but this project is approaching 16,000 lines of code, and I doubt anyone is willing to search through that to find a stranger's problem.
Any suggestions would be helpful. I'm primarily curious about common mistakes that can be made when working with OpenGL textures, or with BMPs in general.
Thank Ya.
//-----Edit One-----//
My friend's result
My result
I'm afraid that I'm not allowed to use other libraries. I probably should have mentioned that in my initial post.
Here is the code where I load in the BMP. I heard from one of the upperclassmen at my school that I was ignoring something called bit depth. I know that the tutorial is pretty bad, and I'd rather learn to do it right than just barely scrape by. If anyone has a good resource on this subject, I would greatly appreciate being pointed in that direction.
unsigned char header[54];
unsigned int dataPos;
unsigned int width, height;
unsigned int imageSize;
unsigned char * data;
FILE * file = fopen(filePath, "rb");
if (!file)
{
printf("\nImage could not be opened");
exit(1);
}
if (fread(header, 1, 54, file) != 54){
printf("\nNot a correct BMP file");
exit(1);
}
if (header[0] != 'B' || header[1] != 'M'){
printf("\nNot a correct BMP file");
exit(1);
}
// Read fields straight out of the 54-byte header (BITMAPFILEHEADER + BITMAPINFOHEADER)
dataPos = *(int*)&(header[0x0A]); // offset to the start of the pixel data
imageSize = *(int*)&(header[0x22]); // biSizeImage (may legitimately be 0)
width = *(int*)&(header[0x12]); // biWidth
height = *(int*)&(header[0x16]); // biHeight
if (imageSize == 0) imageSize = width * height * 3; // assumes 24 bpp with no row padding
if (dataPos == 0) dataPos = 54; // assumes the data starts right after the header
data = new unsigned char[imageSize];
fread(data, 1, imageSize, file);
fclose(file);
glGenTextures(1, &m_textureID);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
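For completeness, here is a minimal sketch of the two things the tutorial glosses over: the bit depth field and BMP row padding. The 0x1C offset is the standard position of biBitCount in BITMAPINFOHEADER; everything else is illustrative and would slot in right after the header checks above, before allocating data.

// Bits per pixel lives at offset 0x1C of the 54-byte header.
// The tutorial silently assumes 24; an 8-bit paletted BMP will come out garbled.
unsigned short bitDepth = *(unsigned short*)&(header[0x1C]);
if (bitDepth != 24)
{
    printf("\nUnsupported BMP bit depth: %d (expected 24)", (int)bitDepth);
    exit(1);
}

// BMP scanlines are padded to 4-byte boundaries, so width * height * 3 is only
// correct when width * 3 already happens to be a multiple of 4.
unsigned int rowSize = ((width * bitDepth + 31) / 32) * 4;
imageSize = rowSize * height;

// Jump to the pixel data explicitly rather than assuming it starts at byte 54.
fseek(file, dataPos, SEEK_SET);

// OpenGL's default row alignment (GL_UNPACK_ALIGNMENT) is 4, which matches the
// BMP padding above. If you ever repack the rows tightly (width * 3 bytes each),
// tell OpenGL before calling glTexImage2D:
// glPixelStorei(GL_UNPACK_ALIGNMENT, 1);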
I am currently using shaders: a vertex shader and a fragment shader identical to the ones described in the tutorial. I verify both of them and make sure that they compile.
//-----Edit Two-----//
So I took durhass' suggestion, and set my color equal to a vec3(0.0, uv.x, uv.y) where uv is a vec2 that holds my texture coordinates, and this is what I get.
So I think I can see the root of the problem. I think that I am not storing my UVs correctly in my GL buffer. I don't think it's a problem with the binary file's UVs, considering that it works fine with my friend's engine. I'll look into this; thank you for the suggestion, this just might lead to a fix!
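In case it helps anyone spot the mistake, this is roughly the shape the UV attribute setup should take. uvBuffer, uvCount, and uvData are placeholder names, and I'm assuming the UVs are tightly packed pairs of floats bound to attribute location 1 as in the tutorial; treat it as a sketch, not my actual code.

GLuint uvBuffer;
glGenBuffers(1, &uvBuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glBufferData(GL_ARRAY_BUFFER, uvCount * 2 * sizeof(float), uvData, GL_STATIC_DRAW);

glEnableVertexAttribArray(1);
glVertexAttribPointer(1,        // attribute location of the UVs in the vertex shader
                      2,        // two floats per UV
                      GL_FLOAT, // component type
                      GL_FALSE, // no normalization
                      0,        // stride 0 = tightly packed
                      (void*)0);

The classic mistakes are passing the wrong component count, or a stride/offset that still matches an interleaved vertex layout.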
Related
I'm trying to load a 2bpp image format into OpenGL textures. The format is just a bunch of indexed-color pixels; 4 pixels fit into one byte, since it's 2 bits per pixel.
My current code works fine in all cases except when the image's width is not divisible by 4. I'm not sure whether this has something to do with the data being 2bpp, as it's converted to an unsigned-byte pixel array (GLubyte raw[4096]) anyway.
16x16? Displays fine.
16x18? Displays fine.
18x16? Garbled mess.
22x16? Garbled mess.
etc.
Here is what I mean by works VS. garbled mess (resized to 3x):
Here is my code:
GLubyte raw[4096];
std::ifstream bin(file, std::ios::ate | std::ios::binary | std::ios::in);
unsigned short size = bin.tellg();
bin.clear();
bin.seekg(0, std::ios::beg);
// first byte is height; width is calculated from a combination of filesize & height
// this part works correctly every time
char ch = 0;
bin.get(ch);
ubyte h = ch;
ubyte w = ((size-1)*4)/h;
printf("%dx%d Filesize: %d (%d)\n", w, h, size-1, (size-1)*4);
// fill it in with 0's which means transparent.
for (int ii = 0; ii < w*h; ++ii) {
if (ii < 4096) {
raw[ii] = 0x00;
} else {
return false;
}
}
size_t i = 0; // index into the unpacked pixel array
while (bin.get(ch)) {
// 2bpp mode
// take each byte in the file, split it into 4 bytes.
raw[i] = (ch & 0x03);
raw[i+1] = (ch & 0x0C) >> 2;
raw[i+2] = (ch & 0x30) >> 4;
raw[i+3] = (ch & 0xC0) >> 6;
i = i + 4;
}
texture_sizes[id][1] = w;
texture_sizes[id][2] = h;
glGenTextures(1, &textures[id]);
glBindTexture(GL_TEXTURE_2D, textures[id]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLenum fmt = GL_RED;
GLint swizzleMask[] = { GL_RED, GL_RED, GL_RED, 255 }; // note: valid swizzle values are GL_RED/GREEN/BLUE/ALPHA/ZERO/ONE; 255 is not one of them (GL_ONE is likely intended)
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
glTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0, fmt, GL_UNSIGNED_BYTE, raw);
glBindTexture(GL_TEXTURE_2D, 0);
What's actually happening, for some reason, is that the image is being treated as if it's 20x24; OpenGL (probably?) seems to be forcefully rounding the width up to the nearest number that's divisible by 4, which would be 20. This is despite the w value in my code being correct at 18; it's as if OpenGL is saying "no, I'm going to make it 20 pixels wide internally."
However since the texture is still being rendered as an 18x24 rectangle, the last 2 pixels of each row - that should be the first 2 pixels of the next row - are just... not being rendered.
Here's what happens when I force my code's w value to always be 20, instead of 18. (I just replaced w = ((size-1)*4)/h with w = 20):
And here's when my w value is 18 again, as in the first image:
As you can see, the image is a whole 2 pixels wider; those 2 pixels at the end of every row should be on the next row, because the width is supposed to be 18, not 20!
This proves that, for whatever reason, the texture bytes were internally parsed and stored as if they were 20x24 instead of 18x24. Why that is, I can't figure out, and I've been trying to solve this specific problem for days. I've verified that the raw bytes are all the values I expect; there's nothing wrong with my data format. Is this an OpenGL bug? Why is OpenGL internally storing my texture as 20x24 when I clearly told it to store it as 18x24? The rest of my code recognizes that I told the width to be 18, not 20; it's just OpenGL itself that doesn't.
Finally, one more note: I've tried loading the exact same file, in the exact same way with the LÖVE framework (Lua), exact same size and exact same bytes as my C++ version and all. And I dumped those bytes into love.image.newImageData and it displays just fine!
That's the final proof that it's not my format's problem; it's very likely OpenGL's problem or something in the code above that I'm overlooking.
How can I solve this problem? The problem being that OpenGL is storing the texture internally with an incorrect width (20 as opposed to the 18 that I gave the function), and is therefore loading the raw unsigned bytes incorrectly.
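For reference, the behavior described above matches OpenGL's default unpack alignment exactly: glTexImage2D assumes each row of client data starts on a 4-byte boundary (GL_UNPACK_ALIGNMENT defaults to 4), so an 18-pixel-wide row of 1-byte GL_RED pixels is read as 20 bytes. A sketch of the usual fix, placed immediately before the existing glTexImage2D call:

// Tell OpenGL the rows of `raw` are tightly packed (1-byte aligned),
// so an 18-pixel row is read as 18 bytes instead of being rounded up to 20.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0, fmt, GL_UNSIGNED_BYTE, raw);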
I would like to create a DevIL image from raw texture data, but I can't seem to find a way to do it. The proper way seems to be ilLoadL with IL_RAW, but I can't get it to work. The documentation here says that there should be a 13-byte header in the data, so I just put meaningless data there. If I pass 0 as the "size" parameter of ilLoadL, I get a black texture no matter what; otherwise my program refuses to draw anything. ilIsImage returns true, and I can create an OpenGL texture from it just fine. The code works if I load the texture from a file.
It's not much, but here's my code so far:
//Loading:
ilInit();
iluInit();
ILuint ilID;
ilGenImages(1, &ilID);
ilBindImage(ilID);
ilEnable(IL_ORIGIN_SET);
ilOriginFunc(IL_ORIGIN_LOWER_LEFT);
//Generate 13-byte header and fill it with meaningless numbers
for (int i = 0; i < 13; ++i){
data.insert(data.begin() + i, i);
}
//This fails.
if (!ilLoadL(IL_RAW, &data[0], size)){
std::cout << "Fail" << std::endl;
}
Texture creation:
ilBindImage(ilId[i]);
ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
glBindTexture(textureTarget, id[i]);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(textureTarget, GL_TEXTURE_MIN_FILTER, filters[i]);
glTexParameterf(textureTarget, GL_TEXTURE_MAG_FILTER, filters[i]);
glTexImage2D(textureTarget, 0, GL_RGBA,
ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());
If an image format has a header, you can generally assume it contains some important information necessary to correctly read the rest of the file. Filling it with "meaningless data" is inadvisable at best.
Since there is no actual struct in DevIL for the .raw header, let us take a look at the implementation of iLoadRawInternal () to figure out what those first 13 bytes are supposed to be.
// Internal function to load a raw image
ILboolean iLoadRawInternal()
{
if (iCurImage == NULL) {
ilSetError(IL_ILLEGAL_OPERATION);
return IL_FALSE;
}
iCurImage->Width = GetLittleUInt(); /* Bytes: 0-3 {Image Width} */
iCurImage->Height = GetLittleUInt(); /* Bytes: 4-7 {Image Height} */
iCurImage->Depth = GetLittleUInt(); /* Bytes: 8-11 {Image Depth} */
iCurImage->Bpp = (ILubyte)igetc(); /* Byte: 12 {Bytes per-pixel} */
/* ... remainder of the loader omitted ... */
NOTE: The /* comments */ are my own
GetLittleUInt () reads a 32-bit unsigned integer in little-endian order and advances the read location appropriately. igetc () does the same for a single byte.
This is equivalent to the following C structure (minus the byte order consideration):
struct RAW_HEADER {
uint32_t width;
uint32_t height;
uint32_t depth; // This is depth as in the number of 3D slices (not bit depth)
uint8_t bpp; // **Bytes** per-pixel (1 = Luminance, 3 = RGB, 4 = RGBA)
};
If you read the rest of the implementation of iLoadRawInternal () in il_raw.c, you will see that without proper values in the header DevIL will not be able to calculate the correct file size. Filling in the correct values should help.
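For illustration, here is a rough sketch of what "filling in the correct values" could look like, assuming data is a std::vector<unsigned char> that already holds width * height * 4 bytes of RGBA pixels (width, height, and the bytes-per-pixel byte are whatever matches your actual data):

// Build the 13-byte .raw header the loader expects and prepend it to the pixel data.
auto putLE32 = [](std::vector<unsigned char>& v, size_t pos, uint32_t value) {
    v[pos + 0] = value & 0xFF;
    v[pos + 1] = (value >> 8) & 0xFF;
    v[pos + 2] = (value >> 16) & 0xFF;
    v[pos + 3] = (value >> 24) & 0xFF;
};

data.insert(data.begin(), 13, 0); // make room for the header
putLE32(data, 0, width);          // bytes 0-3  : image width
putLE32(data, 4, height);         // bytes 4-7  : image height
putLE32(data, 8, 1);              // bytes 8-11 : depth (number of slices)
data[12] = 4;                     // byte 12    : bytes per pixel (4 = RGBA)

if (!ilLoadL(IL_RAW, &data[0], (ILuint)data.size())) {
    std::cout << "Fail" << std::endl;
}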
I have a single texture working, but I cannot figure out how to switch between 2, or how glBindTexture actually works.
I copied this from somewhere and it works, and I believe that I understand most of it. The problem is that it only works while glBindTexture(GL_TEXTURE_2D, texture[0].texID); stays commented out, which I don't understand. This code shouldn't be a problem; I think it's something simple I am missing.
bool LoadTGA(TextureImage *texture, char *filename) // Loads A TGA File Into Memory
{
GLubyte TGAheader[12]={0,0,2,0,0,0,0,0,0,0,0,0}; // Uncompressed TGA Header
GLubyte TGAcompare[12]; // Used To Compare TGA Header
GLubyte header[6]; // First 6 Useful Bytes From The Header
GLuint bytesPerPixel; // Holds Number Of Bytes Per Pixel Used In The TGA File
GLuint imageSize; // Used To Store The Image Size When Setting Aside Ram
GLuint temp; // Temporary Variable
GLuint type=GL_RGBA; // Set The Default GL Mode To RBGA (32 BPP)
system("cd");
FILE *file = fopen(filename, "rb"); // Open The TGA File In Binary Mode
if( file==NULL || // Does File Even Exist?
fread(TGAcompare,1,sizeof(TGAcompare),file)!=sizeof(TGAcompare) || // Are There 12 Bytes To Read?
memcmp(TGAheader,TGAcompare,sizeof(TGAheader))!=0 || // Does The Header Match What We Want?
fread(header,1,sizeof(header),file)!=sizeof(header)) // If So Read Next 6 Header Bytes
{
if (file == NULL) // Did The File Even Exist? *Added Jim Strong*
{
perror("Error");
return false; // Return False
}
else
{
fclose(file); // If Anything Failed, Close The File
perror("Error");
return false; // Return False
}
}
texture->width = header[1] * 256 + header[0]; // Determine The TGA Width (highbyte*256+lowbyte)
texture->height = header[3] * 256 + header[2]; // Determine The TGA Height (highbyte*256+lowbyte)
if( texture->width <=0 || // Is The Width Less Than Or Equal To Zero
texture->height <=0 || // Is The Height Less Than Or Equal To Zero
(header[4]!=24 && header[4]!=32)) // Is The TGA 24 or 32 Bit?
{
fclose(file); // If Anything Failed, Close The File
return false; // Return False
}
texture->bpp = header[4]; // Grab The TGA's Bits Per Pixel (24 or 32)
bytesPerPixel = texture->bpp/8; // Divide By 8 To Get The Bytes Per Pixel
imageSize = texture->width*texture->height*bytesPerPixel; // Calculate The Memory Required For The TGA Data
texture->imageData=(GLubyte *)malloc(imageSize); // Reserve Memory To Hold The TGA Data
if( texture->imageData==NULL || // Does The Storage Memory Exist?
fread(texture->imageData, 1, imageSize, file)!=imageSize) // Does The Image Size Match The Memory Reserved?
{
if(texture->imageData!=NULL) // Was Image Data Loaded
free(texture->imageData); // If So, Release The Image Data
fclose(file); // Close The File
return false; // Return False
}
for(GLuint i=0; i<int(imageSize); i+=bytesPerPixel) // Loop Through The Image Data
{ // Swaps The 1st And 3rd Bytes ('R'ed and 'B'lue)
temp=texture->imageData[i]; // Temporarily Store The Value At Image Data 'i'
texture->imageData[i] = texture->imageData[i + 2]; // Set The 1st Byte To The Value Of The 3rd Byte
texture->imageData[i + 2] = temp; // Set The 3rd Byte To The Value In 'temp' (1st Byte Value)
}
fclose (file); // Close The File
// Build A Texture From The Data
glGenTextures(1, &texture[0].texID); // Generate OpenGL texture IDs
//glBindTexture(GL_TEXTURE_2D, texture[0].texID); // Bind Our Texture
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Linear Filtered
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // Linear Filtered
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
if (texture[0].bpp==24) // Was The TGA 24 Bits
{
type=GL_RGB; // If So Set The 'type' To GL_RGB
}
glTexImage2D(GL_TEXTURE_2D, 0, type, texture[0].width, texture[0].height, 0, type, GL_UNSIGNED_BYTE, texture[0].imageData);
return true;
}
Now when I draw I have this:
glEnable(GL_TEXTURE_2D);
//glBindTexture(GL_TEXTURE_2D, texturesList[0].texID);
glColor4f(1, 1, 1, 1);
glBegin(GL_POLYGON);
glTexCoord2f(0.0f, 0.0f);
glVertex4f(-50, 0, 50, 1);
glTexCoord2f(50.0f, 0.0f);
glVertex4f(-50, 0, -50, 1);
glTexCoord2f(50.0f, 50.0f);
glVertex4f(50, 0, -50, 1);
glTexCoord2f(0.0f, 50.0f);
glVertex4f(50, 0, 50, 1);
glEnd();
glDisable(GL_TEXTURE_2D);
And this at the start of the program:
LoadTGA(&texturesList[0], "\snow.tga");
LoadTGA(&texturesList[1], "\snow2.tga");
So after it loads them texturesList contains 2 textures with ids of 1 and 2.
So do I not call glBindTexture(GL_TEXTURE_2D, texturesList[0].texID); before I draw to choose the right texture? Because I have to tell glTexCoord2f what to operate on?
It works perfectly if I never call glBind in my draw, but if I do, nothing shows up. What I am more confused about is that glBind doesn't need to be called for it to work.
But the last texture I create gets shown (snow2.tga).
If I can clear anything up let me know.
So do I not call glBindTexture (GL_TEXTURE_2D, texturesList[0].texID); before I draw to choose the right texture? Because I have to tell glTexCoord2f what to operate on?
glTexCoord2f (...) operates at the per-vertex level. It is independent of what texture you have loaded; that is actually the whole point. You can map any texture you want simply by changing which texture is bound when you draw.
It works perfectly if I never call glBind in my draw, but if I do, nothing shows up. What I am more confused about is that glBind doesn't need to be called for it to work.
You need to bind your texture in LoadTGA (...) because "generating" a name alone is insufficient.
All that glGenTextures (...) does is return one or more unused names from the list of names OpenGL has for textures and reserve them so that a subsequent call does not give out the same name.
It does not actually create a texture; the name returned does not become a texture until you bind it. Until that time the name is merely in a reserved state. Commands such as glTexParameterf (...) and glTexImage2D (...) operate on the currently bound texture, so in addition to generating a texture you must also bind one before making those calls.
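Concretely, using the names from your own code, the load path and the draw path would look something like this (sketch only):

// Inside LoadTGA (...): make the generated name an actual texture object before
// configuring it, so the glTexParameter*/glTexImage2D calls below target it.
glGenTextures(1, &texture[0].texID);
glBindTexture(GL_TEXTURE_2D, texture[0].texID);   // must not stay commented out
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, type, texture[0].width, texture[0].height,
             0, type, GL_UNSIGNED_BYTE, texture[0].imageData);

// At draw time, bind whichever texture the next piece of geometry should use:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texturesList[0].texID);   // snow.tga
// ... draw the first polygon ...
glBindTexture(GL_TEXTURE_2D, texturesList[1].texID);   // snow2.tga
// ... draw the second polygon ...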
Now, onto some other serious issues that are not related to OpenGL:
Do whatever you can to get rid of your system ("cd"); line. There are much better ways of changing the working directory.
SetCurrentDirectory (...) (Windows)
chdir (...) (Linux/OS X/BSD/POSIX)
Do not use the file name "\snow.tga" as a string literal, because a C compiler may see "\" and interpret whatever comes after it as part of an escape sequence. Consider "\\snow.tga" instead or "/snow.tga" (yes, this even works on Windows - "\" is a terrible character to use as a path separator).
"\s" is not actually a recognized escape sequence by C compilers, but using "\" to begin your path is playing with fire because there are a handful of reserved characters where it will actually matter. "\fire.tga", for instance, is actually shorthand for {0x0c} "ire.tga". The compiler will replace your string literal with that sequence of bytes and will leave you scratching your head trying to figure out what went wrong.
Using C++ and OSG I'm trying to upload a float texture to my shader, but somehow it does not seem to work. At the end I've posted part of my code. The main question is how to create an osg::Image object using data from a float array. In OpenGL the desired code would be
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, width, height, 0,
GL_LUMINANCE, GL_FLOAT, data);
but in this case I have to use OSG.
The code runs fine when using
Image* image = osgDB::readImageFile("someImage.jpg");
instead of
image = new Image;
but I need to upload generated float data. It's also not possible to switch to unsigned char arrays as I need the GL_LUMINANCE32F_ARB data range in the shader code.
I hope someone can help me here, as Google couldn't (I searched for e.g. "osg float image"). So here's my code.
using namespace std;
using namespace osg;
//...
float* data = new float[width*height];
fill_n(data, size, 1.0); // << I actually do this for testing purposes
Texture2D* texture = new Texture2D;
Image* image = new Image;
osg::State* state = new osg::State;
Uniform* uniform = new Uniform(Uniform::SAMPLER_2D, "texUniform");
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setDataVariance(osg::Object::DYNAMIC);
texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
texture->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::CLAMP_TO_EDGE);
if (data == NULL)
cout << "texdata null" << endl; // << this is not printed
image->setImage(width, height, 1, GL_LUMINANCE32F_ARB,
GL_LUMINANCE, GL_FLOAT,
(unsigned char*)data, osg::Image::USE_NEW_DELETE);
if (image->getDataPointer() == NULL)
cout << "datapointernull" << endl; // << this is printed
if (!image->valid())
exit(1); // << here the code exits (hard exit just for testing purposes)
osgDB::writeImageFile(*image, "blah.png");
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setImage(image);
camera->getOrCreateStateSet()->setTextureAttributeAndModes(4, texture);
state->setActiveTextureUnit(4);
texture->apply(*state);
uniform->set(4);
addProgrammUniform(uniform);
I found another way on the web: letting osg::Image allocate the data and filling it in afterwards. But somehow this does not work either. I inserted this just after the new ... lines above.
image->setInternalTextureFormat(GL_LUMINANCE32F_ARB);
image->allocateImage(width,height,1,GL_LUMINANCE,GL_FLOAT);
if (image->data() == NULL)
cout << "null here?!" << endl; // << this is printed.
I use the following (simplified) code to create and set a floating-point texture:
// Create texture and image
osg::Texture* texture = new osg::Texture2D;
osg::Image* image = new osg::Image();
image->allocateImage(size, size, 1, GL_LUMINANCE, GL_FLOAT);
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
texture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
texture->setImage(image);
// Set texture to node
osg::StateSet* stateSet = node->getOrCreateStateSet();
stateSet->setTextureAttributeAndModes(TEXTURE_UNIT_NUMBER, texture);
// Set data
float* data = reinterpret_cast<float*>(image->data());
/* ...data processing... */
image->dirty();
You may want to change some of the parameters, but this should give you a start. I believe that in your case TEXTURE_UNIT_NUMBER should be set to 4.
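For example, filling the whole texture with a constant (the same test the question does with fill_n) would look like this; data and size are the variables from the snippet above, and the layout is one float per GL_LUMINANCE texel, row by row:

// Write one float per texel, then tell OSG the image changed so it re-uploads it.
for (int y = 0; y < size; ++y)
{
    for (int x = 0; x < size; ++x)
    {
        data[y * size + x] = 1.0f;
    }
}
image->dirty();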
but I need to upload generated float data. It's also not possible to switch to unsigned char arrays as I need the GL_LUMINANCE32F_ARB data range in the shader code.
osgDB::writeImageFile(*image, "blah.png");
PNG files don't support 32-bit-per-channel data, so you cannot write your texture to a file this way. See the libpng book:
PNG grayscale images support the widest range of pixel depths of any image type. Depths of 1, 2, 4, 8, and 16 bits are supported, covering everything from simple black-and-white scans to full-depth medical and raw astronomical images.[63]
[63] Calibrated astronomical image data is usually stored as 32-bit or 64-bit floating-point values, and some raw data is represented as 32-bit integers. Neither format is directly supported by PNG, although one could, in principle, design an ancillary chunk to hold the proper conversion information. Conversion of data with more than 16 bits of dynamic range would be a lossy transformation, however--at least, barring the abuse of PNG's alpha channel or RGB capabilities.
For 32 bit per channel, check out the OpenEXR format.
If however 16bit floating points (i.e. half floats) suffice, then you can go about it like so:
osg::ref_ptr<osg::Image> heightImage = new osg::Image;
int pixelFormat = GL_LUMINANCE;
int type = GL_HALF_FLOAT;
heightImage->allocateImage(tex_width, tex_height, 1, pixelFormat, type);
Now to actually use and write half floats, you can use the GLM library. You get the half-float type by including <glm/detail/type_half.hpp>; it is called hdata.
You now need to get the data pointer from your image and cast it to said format:
glm::detail::hdata *data = reinterpret_cast<glm::detail::hdata*>(heightImage->data());
This you can then access like you would a one dimensional array, so for example
data[currentRow*tex_width+ currentColumn] = glm::detail::toFloat16(3.1415f);
Note that if you write this same data to a bmp or tif file (using the osg plugins), the result will be incorrect. In my case I just got the left half of the intended image stretched onto the full width, and not in grayscale but in some strange color encoding.
I have some code I am trying to run on my laptop, but it keeps giving a 'FALLBACK' error. I don't know what it is, but it is quite annoying. It should just print 'Hello world!', but it prints it twice and changes the colours a little bit.
The same code is running perfectly on my PC.
I've searched a long time for a solution to this problem but couldn't find anything. I hope someone out here can help me.
Here is my code:
// Template, major revision 3
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"
using namespace Tmpl8;
void Game::Init()
{
// put your initialization code here; will be executed once
}
void Game::Tick( float a_DT )
{
m_Screen->Clear( 0 );
m_Screen->Print( "hello world", 2, 2, 0xffffff );
m_Screen->Line( 2, 10, 66, 10, 0xffffff );
}
Thanks in advance! :-)
Edit:
It gives an error on this line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Maybe this could help?
Looking at this post from OpenGl Forums and seeing that you're using OpenGL, I may have an idea.
You say that the code works fine on your computer but not on your notebook. So you have a possible hardware difference (different video cards) and a possible software difference (different OpenGL version/support).
What may be happening is that the feature you want to use from OpenGL is not supported on your notebook. Also, you are creating a texture without data (the NULL in the last parameter); this will probably give you errors such as a buffer overflow.
EDIT:
You may take a look at GLEW. It has a tool called "glewinfo" that looks for all the features available on your hardware/driver; it generates a file of the same name in the same path as the executable. For non-power-of-two texture support, look for GL_ARB_texture_non_power_of_two.
EDIT 2:
As you said in the comments, without the GL_ARB_texture_non_power_of_two extension, and with the texture having a size of 640x480, glTexImage2D will give you an error, and all the code that depends on it will likely fail. To fix it, you have to stretch the dimensions of the image to the next power of two; in this case it would become 1024x512. Remember that the data you supply to glTexImage2D MUST have these dimensions.
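A small helper for that rounding might look like this (sketch only; how you then resize or pad the actual pixel data is up to you):

// Round a texture dimension up to the next power of two (640 -> 1024, 480 -> 512).
unsigned int NextPowerOfTwo(unsigned int v)
{
    unsigned int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}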
Seeing that the error comes from the line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Here are the reasons why that function could generate GL_INVALID_VALUE. Since I can't check it for sure, you'll have to go through this list and figure out which one of them caused this issue.
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if internalFormat is not 1, 2, 3, 4, or one of the accepted resolution and format symbolic constants.
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if non-power-of-two textures are not supported and the width or height cannot be represented as 2^k + 2*border for some integer value of k.
GL_INVALID_VALUE is generated if border is not 0 or 1.
EDIT: I believe it could be the non-power-of-two texture size that's causing the problem. Rounding your texture size to the nearest power-of-two should probably fix the issue.
EDIT2: To test which of these is causing an issue, let's start with the most common one: trying to create a texture of non-power-of-two size. Create an image of size 256x256 and call this function with 256 for width and height. If the function still fails, I would try setting the level to 0 (keeping the power-of-two size in place).
BUT DANG you don't have data for your image? It's set as NULL. You need to load the image data into memory and pass it to the function to create the texture. And you aren't doing that. Read how to load images from a file or how to render to texture, whichever is relevant to you.
This is to give you a better answer as a fresh post. First you need this helper function to load a bmp file into memory.
unsigned int LoadTex(string Image)
{
unsigned int Texture;
FILE* img = NULL;
img = fopen(Image.c_str(),"rb");
if (img == NULL)
return 0; // could not open the file
unsigned long bWidth = 0;
unsigned long bHeight = 0;
DWORD size = 0;
fseek(img,18,SEEK_SET);
fread(&bWidth,4,1,img);
fread(&bHeight,4,1,img);
fseek(img,0,SEEK_END);
size = ftell(img) - 54; // pixel data size = file size minus the 54-byte header
unsigned char *data = (unsigned char*)malloc(size);
fseek(img,54,SEEK_SET); // image data
fread(data,size,1,img);
fclose(img);
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bWidth, bHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
if (data)
free(data);
return Texture;
}
Courtesy: Post by b0x in Game Deception.
Then you need to call it in your code likewise:
unsigned int texture = LoadTex("example_tex.bmp");
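When drawing, bind the returned id first so the texture coordinates apply to that texture (a sketch, using immediate mode as elsewhere in this thread):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);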