I have an Nvidia GTX 970 with the latest (441.66) driver for Win 10 x64 (build 18362), which is obviously fully OpenGL 4.6 compliant, and I'm currently compiling an app with VS2017.
My problem is that I seem to be unable to use any texture type other than GL_UNSIGNED_BYTE. I'm currently trying to set up a single-channel, unsigned integer (32-bit) texture, but however I try to allocate the texture, OpenGL immediately raises GL_INVALID_VALUE and the shader's result turns all black.
So far I have tried allocating immutably:
glTexStorage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048);
And mutably:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, textureData);
I tried signed int too; same thing. I also checked with Nsight VS edition: for UINT 2D textures my max resolution is 16384x16384, so that's not the issue. Also, according to Nsight, uint textures are fully supported by the OpenGL driver.
What am I missing here?
Minimal reproducible version:
#include <iostream>
#include <GL/glew.h>
#include <GL/glu.h> // for gluErrorString
#include <SDL.h>
void OGLErrorCheck()
{
const GLenum errorCode = glGetError();
if (errorCode != GL_NO_ERROR)
{
const GLubyte* const errorString = gluErrorString(errorCode);
std::cout << errorString;
}
}
int main(int argc, char* argv[])
{
glewInit();
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, nullptr);
OGLErrorCheck();
getchar();
return 0;
}
This yields GL_INVALID_OPERATION.
This is linked with the latest SDL and GLEW, both free software, available for download at https://www.libsdl.org/ and http://glew.sourceforge.net/ respectively.
From the specs about glTexStorage2D:
void glTexStorage2D(
GLenum target,
GLsizei levels,
GLenum internalformat,
GLsizei width,
GLsizei height
);
[…]
GL_INVALID_VALUE is generated if width, height or levels are less than 1
And the value for levels you pass to glTexStorage2D is 0.
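A corrected immutable allocation (a minimal sketch that keeps the question's size and format, and assumes a single mip level is enough) would therefore be:
// levels must be at least 1; here exactly one mip level is allocated
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, 3072, 2048);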
Apart from that, first of all you have to create an OpenGL context, e.g.:
(See also Using OpenGL With SDL)
if (SDL_Init(SDL_INIT_VIDEO) < 0)
return 0;
SDL_Window *window = SDL_CreateWindow("ogl wnd", 0, 0, width, height, SDL_WINDOW_OPENGL);
if (window == nullptr)
return 0;
SDL_GLContext context = SDL_GL_CreateContext(window);
if (glewInit() != GLEW_OK)
return 0;
Then you have to generate a texture name with glGenTextures:
GLuint tobj;
glGenTextures(1, &tobj);
After that, you have to bind the named texture to a texturing target with glBindTexture:
glBindTexture(GL_TEXTURE_2D, tobj);
Finally, you can specify the two-dimensional texture image with glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);
Note, the texture format has to be GL_RED_INTEGER rather than GL_RED, because the source texture image has to be interpreted as integral data rather than normalized floating-point data. The format and type parameters specify the format of the source data; the internalformat parameter specifies the format of the target texture image.
Set the texture parameters with glTexParameteri (this can be done before glTexImage2D, too):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
If you do not generate mipmaps (with glGenerateMipmap), then setting GL_TEXTURE_MIN_FILTER is important. Since the default minifying filter is GL_NEAREST_MIPMAP_LINEAR, the texture would be mipmap-incomplete if you do not change the minifying function to GL_NEAREST or GL_LINEAR.
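Putting these steps together, a minimal sketch of the whole integer-texture setup (assuming the SDL/GLEW initialization shown above has already run and that textureData points to 3072 x 2048 unsigned 32-bit values, as in the question) could look like this:
GLuint tobj = 0;
glGenTextures(1, &tobj);
glBindTexture(GL_TEXTURE_2D, tobj);
// Integer textures must not use linear filtering, otherwise they are incomplete.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// One immutable mip level; the upload uses GL_RED_INTEGER / GL_UNSIGNED_INT.
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, 3072, 2048);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 3072, 2048, GL_RED_INTEGER, GL_UNSIGNED_INT, textureData);
// In the shader, such a texture is sampled through a usampler2D.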
And mutably: glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, textureData);
I tried signed int too; same thing. I also checked with Nsight VS edition: for UINT 2D textures my max resolution is 16384x16384, so that's not the issue. Also, according to Nsight, uint textures are fully supported by the OpenGL driver.
What am I missing here?
For unnormalized integer texture formats, the format parameter of glTex[Sub]Image is not allowed to be just GL_RED; you have to use GL_RED_INTEGER. The format/type combination GL_RED, GL_UNSIGNED_INT is for specifying normalized fixed-point or floating-point formats only.
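Applied to the mutable allocation from the question (same size, internal format and data pointer; only the format parameter changes), that would be:
// GL_RED_INTEGER marks the source data as unnormalized integer data
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, textureData);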
Related
I have a 3D graphics application that is exhibiting bad texturing behavior (specifically: a specific texture is showing up as black when it shouldn't be). I have isolated the texture data in the following call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, fmt->gl_type, data)
I've inspected all of the values in the call and have verified they aren't NULL. Is there a way to use all of this data to save the texture to the (Linux) filesystem in a bitmap/PNG/some other viewable format, so that I can inspect it and verify it isn't black or some sort of garbage? In case it matters, I'm using OpenGL ES 2.0 (GLES2).
If you want to read the pixels from a texture image in OpenGL ES, then you have to attach the texture to a framebuffer and read the color plane from the framebuffer with glReadPixels:
GLuint textureObj = ...; // the texture object - glGenTextures
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureObj, 0);
int data_size = width * height * 4;
GLubyte* pixels = new GLubyte[data_size];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
All the used functions in this code snippet are supported by OpenGL ES 2.0.
Note, in desktop OpenGL there is glGetTexImage, which can be used to read pixel data from a texture. This function doesn't exist in OpenGL ES.
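For completeness, a minimal desktop-GL sketch with glGetTexImage (assuming textureObj holds an RGBA8 image of size width x height, as above) could look like this:
// Desktop OpenGL only: read back mip level 0 of the bound texture.
GLubyte* texPixels = new GLubyte[width * height * 4];
glBindTexture(GL_TEXTURE_2D, textureObj);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, texPixels);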
To write an image to a file (in C++), I recommend using a library like the STB library, which can be found at GitHub - nothings/stb.
To use the STB library it is sufficient to include the header files (it is not necessary to link anything):
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include <stb_image_write.h>
Use stbi_write_bmp to write a BMP file:
stbi_write_bmp( "myfile.bmp", width, height, 4, pixels );
Note, it is also possible to write other file formats with stbi_write_png, stbi_write_tga or stbi_write_jpg.
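Combined with the glReadPixels snippet above (so pixels holds width * height RGBA bytes), writing a PNG could look like this; the file name is just an example:
// glReadPixels returns rows bottom-up, so flip the image while writing it
stbi_flip_vertically_on_write(1);
stbi_write_png("texture_dump.png", width, height, 4, pixels, width * 4);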
I wanted to try making a game with OpenGL and GLUT, but as it turns out, GLUT is not well adapted to making games. So I switched to using SDL 1.2 (this is for a sort of competition, so I can't use SDL 2). When I saw I could use OpenGL within SDL, I decided to do that, since I had already written a majority of my code with OpenGL. Now, I'm having issues trying to load an image into an SDL_Surface and then converting it to an OpenGL texture, with OpenGL blending enabled. Here is the code I'm using (loadImage loads an SDL_Surface & loadTexture loads into an OpenGL texture):
SDL_Surface * Graphics::loadImage(const char * filename) {
SDL_Surface *loaded = nullptr;
SDL_Surface *optimized = nullptr;
loaded = IMG_Load(filename);
if (loaded) {
optimized = SDL_DisplayFormat(loaded);
SDL_FreeSurface(loaded);
}
return optimized;
}
GLuint Graphics::loadTexture(const char * filename, GLuint oldTexId) {
//return SOIL_load_OGL_texture(filename, SOIL_LOAD_AUTO, oldTexId, SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA);
GLuint texId = 0;
SDL_Surface *s = loadImage(filename);
if (!s) return 0;
if (oldTexId) glDeleteTextures(1, &oldTexId);
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
int format;
if (s->format->BytesPerPixel == 4) {
if (s->format->Rmask == 0x000000ff)
format = GL_RGBA;
else
format = GL_BGRA;
} else if (s->format->BytesPerPixel == 3) {
if (s->format->Rmask == 0x000000ff)
format = GL_RGB;
else
format = GL_BGR;
}
glTexImage2D(GL_TEXTURE_2D, 0, s->format->BytesPerPixel, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
SDL_FreeSurface(s);
return texId;
}
I've been searching online for a solution to this issue quite a bit, and none of the solutions I found worked. This code actually works when I don't glEnable(GL_BLEND), but when I do enable it, it doesn't show anything on screen anymore. I am fairly new to OpenGL, and I'm not sure I'm using the glTexImage2D correctly.
The way I was loading images before I converted to SDL was using the SOIL library, and when I replace the loadTexture function's body with that commented out first line, it actually works fine, but I'd rather have less external libraries, and do everything graphics-side with SDL & OpenGL.
The third argument of glTexImage2D is wrong:
glTexImage2D(GL_TEXTURE_2D, 0, s->format->BytesPerPixel, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
The third argument is internalFormat and must be one of the base internal formats:
GL_DEPTH_COMPONENT
GL_DEPTH_STENCIL
GL_RED
GL_RG
GL_RGB
GL_RGBA
Or one of the sized internal formats, which specify the number of bits per channel.
So, in other words, your third argument should be one of:
GL_RGB
GL_RGB8
GL_RGBA
GL_RGBA8
if you're using an 8-bit-per-channel texture.
Whereas the 7th argument, format, can be either RGB or BGR (including the alpha versions), the third argument, internalFormat, can only be RGB(A), not the other way around.
So the place where you check the red mask and choose the format is still fine for the 7th argument, but the third argument (internalFormat) should be either GL_RGB or GL_RGBA, or optionally the sized versions GL_RGB8 or GL_RGBA8.
glTexImage2D(GL_TEXTURE_2D, 0, /*GL_RGB or GL_RGBA*/, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
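For example, the internal format could be derived from the surface's bytes per pixel, while the red-mask check keeps selecting format (a sketch using the question's variables):
// Sized internal format from the pixel size; format stays GL_RGB(A)/GL_BGR(A) as determined above.
GLint internalFormat = (s->format->BytesPerPixel == 4) ? GL_RGBA8 : GL_RGB8;
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);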
Docs
How can I attach a depth buffer to my framebuffer object when I use GL_TEXTURE_2D_MULTISAMPLE? glCheckFramebufferStatus(msaa_fbo) from the code below returns 0. From the documentation this seems to mean that msaa_fbo is not a framebuffer, but it is created with glGenFramebuffers(1, &msaa_fbo);.
Additionally, if an error occurs, zero is returned.
GL_INVALID_ENUM is generated if target is not GL_DRAW_FRAMEBUFFER, GL_READ_FRAMEBUFFER or GL_FRAMEBUFFER.
The error is 1280, which I think means GL_INVALID_ENUM.
If I remove the depth buffer attachment, the program runs and renders (although without depth testing); the error is still present when it runs then. With the depth attachment included there is an error (1286) after every frame, which is GL_INVALID_FRAMEBUFFER_OPERATION. I don't know how to continue from here. Some examples I've looked at do pretty much the same thing but seem to work.
glGenTextures(1, &render_target_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, render_target_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_RGBA8, width, height, false);
glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, render_target_texture, 0);
glGenRenderbuffers(1, &depth_render_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, depth_render_buffer);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, NUM_SAMPLES, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_render_buffer);
GLenum status = glCheckFramebufferStatus(msaa_fbo);
Most of the code is from this.
EDIT
The status check was wrong, it should've been GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);. Now there is no error when I don't include the depth. When I include depth I get this error now: GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE.
EDIT 2
Documentation claims that this happens when GL_TEXTURE_SAMPLES and GL_RENDERBUFFER_SAMPLES don't match:
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE is returned if the value of GL_RENDERBUFFER_SAMPLES is not the same for all attached renderbuffers; if the value of GL_TEXTURE_SAMPLES is not the same for all attached textures; or, if the attached images are a mix of renderbuffers and textures, the value of GL_RENDERBUFFER_SAMPLES does not match the value of GL_TEXTURE_SAMPLES.
But they do!
I've tested them like this:
std::cout << "GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE" << std::endl;
GLsizei gts, grs;
glGetTexLevelParameteriv(GL_TEXTURE_2D_MULTISAMPLE, 0, GL_TEXTURE_SAMPLES, &gts);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_SAMPLES, &grs);
std::cout << "GL_TEXTURE_SAMPLES: " << gts << std::endl;
std::cout << "GL_RENDERBUFFER_SAMPLES: " << grs << std::endl;
Output is:
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE
GL_TEXTURE_SAMPLES: 8
GL_RENDERBUFFER_SAMPLES: 8
EDIT 3
Worked around this by using two textures instead of a texture and a renderbuffer like this:
glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glGenTextures(1, &render_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, render_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_RGBA8, width, height, false);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, render_texture, 0);
glGenTextures(1, &depth_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, depth_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_DEPTH_COMPONENT, width, height, false);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D_MULTISAMPLE, depth_texture, 0);
I'm am still interested in what was wrong with the original implementation, so question is still standing.
You need to use fixed sample locations for the texture if you mix it with renderbuffers. From the spec, in section "Framebuffer Completeness":
The value of TEXTURE_FIXED_SAMPLE_LOCATIONS is the same for all attached textures; and, if the attached images are a mix of renderbuffers and textures, the value of TEXTURE_FIXED_SAMPLE_LOCATIONS must be TRUE for all attached textures.
{FRAMEBUFFER_INCOMPLETE_MULTISAMPLE}
To avoid this error condition, the call for setting up the texture storage needs to be changed to:
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,
NUM_SAMPLES, GL_RGBA8, width, height, GL_TRUE);
I'm trying to use luminance textures on my ATI graphics card.
The problem: I'm not being able to correctly retrieve data from my GPU. Whenever I try to read it (using glReadPixels), all it gives me is an 'all-ones' array (1.0, 1.0, 1.0...).
You can test it with this code:
#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GL/glut.h>
static int arraySize = 64;
static int textureSize = 8;
//static GLenum textureTarget = GL_TEXTURE_2D;
//static GLenum textureFormat = GL_RGBA;
//static GLenum textureInternalFormat = GL_RGBA_FLOAT32_ATI;
static GLenum textureTarget = GL_TEXTURE_RECTANGLE_ARB;
static GLenum textureFormat = GL_LUMINANCE;
static GLenum textureInternalFormat = GL_LUMINANCE_FLOAT32_ATI;
int main(int argc, char** argv)
{
// create test data and fill arbitrarily
float* data = new float[arraySize];
float* result = new float[arraySize];
for (int i = 0; i < arraySize; i++)
{
data[i] = i + 1.0;
}
// set up glut to get valid GL context and
// get extension entry points
glutInit (&argc, argv);
glutCreateWindow("TEST1");
glewInit();
// viewport transform for 1:1 pixel=texel=data mapping
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, textureSize, 0.0, textureSize);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glViewport(0, 0, textureSize, textureSize);
// create FBO and bind it (that is, use offscreen render target)
GLuint fboId;
glGenFramebuffersEXT(1, &fboId);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
// create texture
GLuint textureId;
glGenTextures (1, &textureId);
glBindTexture(textureTarget, textureId);
// set texture parameters
glTexParameteri(textureTarget, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(textureTarget, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_T, GL_CLAMP);
// define texture with floating point format
glTexImage2D(textureTarget, 0, textureInternalFormat, textureSize, textureSize, 0, textureFormat, GL_FLOAT, 0);
// attach texture
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textureTarget, textureId, 0);
// transfer data to texture
//glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
//glRasterPos2i(0, 0);
//glDrawPixels(textureSize, textureSize, textureFormat, GL_FLOAT, data);
glBindTexture(textureTarget, textureId);
glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, data);
// and read back
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);
// print out results
printf("**********************\n");
printf("Data before roundtrip:\n");
printf("**********************\n");
for (int i = 0; i < arraySize; i++)
{
printf("%f, ", data[i]);
}
printf("\n\n\n");
printf("**********************\n");
printf("Data after roundtrip:\n");
printf("**********************\n");
for (int i = 0; i < arraySize; i++)
{
printf("%f, ", result[i]);
}
printf("\n");
// clean up
delete[] data;
delete[] result;
glDeleteFramebuffersEXT (1, &fboId);
glDeleteTextures (1, &textureId);
system("pause");
return 0;
}
I also read somewhere on the internet that ATI cards don't support luminance yet. Does anyone know if this is true?
This has nothing to do with luminance values; the problem is with you reading floating point values.
In order to read floating-point data back properly via glReadPixels, you first need to set the color clamping mode. Since you're obviously not using OpenGL 3.0+, you should be looking at the ARB_color_buffer_float extension. In that extension is glClampColorARB, which works pretty much like the core 3.0 version.
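A minimal sketch of that call (assuming ARB_color_buffer_float is available and the entry point has been loaded, e.g. by GLEW) would be:
// disable clamping of values read back from floating-point color buffers
glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);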
Here's what I found out:
1) If you use GL_LUMINANCE as the texture format (and GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), glClampColor(..) (or glClampColorARB(..)) doesn't seem to work at all.
I was only able to see the values getting actively clamped/not clamped if I set the texture format to GL_RGBA. I don't understand why this happens, since the only glClampColor(..) limitation I heard of is that it works exclusively with floating-point buffers, which all the chosen internal formats seem to be.
2) If you use GL_LUMINANCE (again, with GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), it looks like you must "correct" your output buffer by dividing each of its elements by 3. I guess this happens because when you use glTexImage2D(..) with GL_LUMINANCE it internally replicates each array component three times, and when you read GL_LUMINANCE values with glReadPixels(..) it calculates its values from the sum of the RGB components (thus, three times what you have given as input). But again, it still gives you clamped values.
3) Finally, if you use GL_RED as texture format (instead of GL_LUMINANCE), you don't need to pack your input buffer and you get your output buffer properly. The values are not clamped and you don't need to call glClampColor(..) at all.
So, I guess I'll stick with GL_RED, because in the end what I wanted was an easy way to send and collect floating-point values from my "kernels" without having to worry about offsetting array indexes or anything like this.
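For reference, a sketch of that GL_RED variant, based on the listing above (this assumes an OpenGL 3.0+ context or ARB_texture_rg, so that GL_R32F is available):
static GLenum textureFormat = GL_RED; // one channel, no luminance replication
static GLenum textureInternalFormat = GL_R32F; // unclamped 32-bit float per texel
// upload and readback then stay one float per texel
glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize, GL_RED, GL_FLOAT, data);
glReadPixels(0, 0, textureSize, textureSize, GL_RED, GL_FLOAT, result);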
Hey, I have this code to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;
texture load_texture(std::string fname){
SDL_Surface *tex_surf = IMG_Load(fname.c_str());
if(!tex_surf){
return 0;
}
texture ret;
glGenTextures(1, &ret);
glBindTexture(GL_TEXTURE_2D, ret);
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
SDL_FreeSurface(tex_surf);
return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when displayed, it just shows the current drawing color), and when I call it from any function outside the main function, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like an other SDL_Surface
int w = pow(2, ceil( log(originalSurface->w)/log(2) ) ); // Round up to the nearest power of two
SDL_Surface* newSurface =
SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB Surface
texture ret;
glGenTextures( 1, &ret );
glBindTexture( GL_TEXTURE_2D, ret );
glTexImage2D( GL_TEXTURE_2D, 0, 3, w, w, 0, GL_RGB,
GL_UNSIGNED_BYTE, newSurface->pixels );
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of those macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Maybe some drivers are just clever and correct this, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted). So it's possible the poster's problem would go away after resizing the image to have dimensions that are powers of two.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture
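If you need to work around that, a small helper for rounding a texture dimension up to the next power of two (a sketch, independent of any particular image library) could look like this:
// round a dimension up to the next power of two, e.g. 640 -> 1024
int next_power_of_two(int n)
{
    int p = 1;
    while (p < n) p <<= 1;
    return p;
}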