SDL terminates when creating renderer (OpenGL)

When I run this code:
#include <iostream>
#include <SDL.h>
#include <stdexcept>
#include <GL/gl3w.h>
int main() try {
    if ( SDL_Init( SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_GAMECONTROLLER ) != 0 )
        throw std::runtime_error{ "Could not initialize sdl" };
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_FLAGS, 0 );
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE );
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 0 );
    // Create window with graphics context
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
    SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
    SDL_GL_SetAttribute( SDL_GL_STENCIL_SIZE, 8 );
    auto window_flags = (SDL_WindowFlags) ( SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE | SDL_WINDOW_ALLOW_HIGHDPI );
    auto window = SDL_CreateWindow( "window", SDL_WINDOWPOS_CENTERED,
                                    SDL_WINDOWPOS_CENTERED, 1280, 720, window_flags );
    auto gl_context = SDL_GL_CreateContext( window );
    SDL_GL_MakeCurrent( window, gl_context );
    SDL_GL_SetSwapInterval( 1 ); // Enable vsync
    if ( gl3wInit() != 0 )
        throw std::runtime_error{ "Unable to initialize OpenGL loader" };
    auto renderer = SDL_CreateRenderer( window, -1, 0 );
    return 0;
}
catch ( std::exception& e ) {
    std::cerr << e.what() << "\n";
    return -1;
}
It produces the following output:
X Error of failed request: GLXBadDrawable
Major opcode of failed request: 151 (GLX)
Minor opcode of failed request: 5 (X_GLXMakeCurrent)
Serial number of failed request: 259
Current serial number in output stream: 259
Process finished with exit code 1
When I comment out the lines that set the OpenGL version, it runs fine.
What is causing this error, and why does SDL_CreateRenderer terminate the process instead of returning a null pointer?
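As a side note (a sketch, not a fix): the snippet above creates the window and context without checking the return values. With checks added, SDL reports which call actually fails via SDL_GetError() before any GLX request is issued. These are standard SDL2 calls, nothing beyond what the code already uses:
// Sketch only: the same calls as above, but checking each return value.
// SDL_CreateWindow and SDL_GL_CreateContext return nullptr on failure, and
// SDL_GL_MakeCurrent returns a negative value; SDL_GetError() explains why.
auto window = SDL_CreateWindow( "window", SDL_WINDOWPOS_CENTERED,
                                SDL_WINDOWPOS_CENTERED, 1280, 720, window_flags );
if ( !window ) {
    std::cerr << "SDL_CreateWindow failed: " << SDL_GetError() << "\n";
    return -1;
}
auto gl_context = SDL_GL_CreateContext( window );
if ( !gl_context ) {
    std::cerr << "SDL_GL_CreateContext failed: " << SDL_GetError() << "\n";
    return -1;
}
if ( SDL_GL_MakeCurrent( window, gl_context ) != 0 ) {
    std::cerr << "SDL_GL_MakeCurrent failed: " << SDL_GetError() << "\n";
    return -1;
}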

SDL_GetWindowFlags() returns seemingly random values [closed]

I need my SDL2 program to know whether a window is fullscreen, and I thought I could get that information using SDL_GetWindowFlags(). By default I initialize my window with two flags, SDL_WINDOW_SHOWN and SDL_WINDOW_BORDERLESS, which are equal to 4 and 16 respectively. So I expected the function to return 20, but instead I get 532, and sometimes 1556, which even changes to 532 at runtime after reinitializing the window a few times. 532 never changes to 1556 at runtime, however.
How do these flags work?
bool init( int windowflags )
{
bool success = true;
if( SDL_Init( SDL_INIT_VIDEO ) < 0 )
{
printf( "Video initialization failed: %s\n", SDL_GetError() );
success = false;
}
else
{
gWindow = SDL_CreateWindow( "VIRGULE", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, WIN_W, WIN_H, SDL_WINDOW_SHOWN + windowflags );
if( gWindow == NULL )
{
printf( "Window could not be created: %s\n", SDL_GetError() );
success = false;
}
else
{
gRenderer = SDL_CreateRenderer( gWindow, -1, SDL_RENDERER_ACCELERATED + SDL_RENDERER_TARGETTEXTURE );
if( gRenderer == NULL )
{
printf( "Renderer could not be created: %s\n", SDL_GetError() );
success = false;
}
else
{
gTexture = SDL_CreateTexture( gRenderer, SDL_PIXELFORMAT_UNKNOWN, SDL_TEXTUREACCESS_TARGET, SCR_W, SCR_H );
if( gTexture == NULL )
{
printf( "Texture creation failed: %s\n", SDL_GetError() );
success = false;
}
}
}
}
printf( "%i\n", SDL_GetWindowFlags( gWindow ) );
// this either prints 1556 or 532
return success;
}
It looks like your flag value is changing based on the states of SDL_WINDOW_INPUT_FOCUS and SDL_WINDOW_MOUSE_FOCUS, but that doesn't matter: flag values change all the time, and you shouldn't worry about the total value. You only need to test the specific flag bit you are watching. The SDL_WINDOW_SHOWN and SDL_WINDOW_BORDERLESS bits are still set when the values are 532 and 1556 (look at the values in binary).
Just grab the value of the bit flag:
Uint32 flags = SDL_GetWindowFlags( gWindow );
bool window_shown      = ( flags & SDL_WINDOW_SHOWN ) != 0;
bool window_borderless = ( flags & SDL_WINDOW_BORDERLESS ) != 0;
bool window_fullscreen = ( flags & SDL_WINDOW_FULLSCREEN ) != 0;
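If the goal is just to detect fullscreen, one detail worth knowing: SDL_WINDOW_FULLSCREEN_DESKTOP includes the SDL_WINDOW_FULLSCREEN bit, so masking with SDL_WINDOW_FULLSCREEN matches both kinds of fullscreen. A small helper along these lines (the function name is just illustrative):
// Illustrative helper: returns true for both SDL_WINDOW_FULLSCREEN and
// SDL_WINDOW_FULLSCREEN_DESKTOP, since the latter includes the former's bit.
bool is_window_fullscreen( SDL_Window* window )
{
    Uint32 flags = SDL_GetWindowFlags( window );
    return ( flags & SDL_WINDOW_FULLSCREEN ) != 0;
}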
Here's a function you can use to see what flags are set based on the value:
#include <stdio.h>
#include <SDL.h>

void show_flags(int flags);

int main()
{
    show_flags(20);
    show_flags(532);
    show_flags(1556);
    return 0;
}

void show_flags(int flags) {
    printf("\nFLAGS ENABLED: ( %d )\n", flags);
    printf("=======================\n");
    if(flags & SDL_WINDOW_FULLSCREEN) printf("SDL_WINDOW_FULLSCREEN\n");
    if(flags & SDL_WINDOW_OPENGL) printf("SDL_WINDOW_OPENGL\n");
    if(flags & SDL_WINDOW_SHOWN) printf("SDL_WINDOW_SHOWN\n");
    if(flags & SDL_WINDOW_HIDDEN) printf("SDL_WINDOW_HIDDEN\n");
    if(flags & SDL_WINDOW_BORDERLESS) printf("SDL_WINDOW_BORDERLESS\n");
    if(flags & SDL_WINDOW_RESIZABLE) printf("SDL_WINDOW_RESIZABLE\n");
    if(flags & SDL_WINDOW_MINIMIZED) printf("SDL_WINDOW_MINIMIZED\n");
    if(flags & SDL_WINDOW_MAXIMIZED) printf("SDL_WINDOW_MAXIMIZED\n");
    if(flags & SDL_WINDOW_INPUT_GRABBED) printf("SDL_WINDOW_INPUT_GRABBED\n");
    if(flags & SDL_WINDOW_INPUT_FOCUS) printf("SDL_WINDOW_INPUT_FOCUS\n");
    if(flags & SDL_WINDOW_MOUSE_FOCUS) printf("SDL_WINDOW_MOUSE_FOCUS\n");
    if(flags & SDL_WINDOW_FULLSCREEN_DESKTOP) printf("SDL_WINDOW_FULLSCREEN_DESKTOP\n");
    if(flags & SDL_WINDOW_FOREIGN) printf("SDL_WINDOW_FOREIGN\n");
}
More flags can be found here: https://wiki.libsdl.org/SDL_WindowFlags.
Output:
FLAGS ENABLED: ( 20 )
=======================
SDL_WINDOW_SHOWN
SDL_WINDOW_BORDERLESS
FLAGS ENABLED: ( 532 )
=======================
SDL_WINDOW_SHOWN
SDL_WINDOW_BORDERLESS
SDL_WINDOW_INPUT_FOCUS
FLAGS ENABLED: ( 1556 )
=======================
SDL_WINDOW_SHOWN
SDL_WINDOW_BORDERLESS
SDL_WINDOW_INPUT_FOCUS
SDL_WINDOW_MOUSE_FOCUS

OpenCV video writer only writes 128-byte files (OS X El Capitan)

So I have tried several pieces of code now, all more or less similar, and they all produce only a 128-byte file. I want to record the webcam stream to a file.
I don't believe this is a codec issue; I tried all of them and still get only 128 bytes. Does anyone know what the problem is? So far I have only tried this on Mac OS X.
For example, the code below:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
int main( int argc, char** argv ) {
CvCapture* capture;
capture = cvCreateCameraCapture(0);
assert( capture != NULL );
IplImage* bgr_frame = cvQueryFrame( capture );
CvSize size = cvSize(
(int)cvGetCaptureProperty( capture,
CV_CAP_PROP_FRAME_WIDTH),
(int)cvGetCaptureProperty( capture,
CV_CAP_PROP_FRAME_HEIGHT)
);
cvNamedWindow( "Webcam", CV_WINDOW_AUTOSIZE );
CvVideoWriter *writer = cvCreateVideoWriter( "vidtry.AVI",
CV_FOURCC('A','V','C','1'),
30,
size
);
while( (bgr_frame = cvQueryFrame( capture )) != NULL )
{
cvWriteFrame(writer, bgr_frame );
cvShowImage( "Webcam", bgr_frame );
char c = cvWaitKey( 33 );
if( c == 27 ) break;
}
cvReleaseVideoWriter( &writer );
cvReleaseCapture( &capture );
cvDestroyWindow( "Webcam" );
return( 0 );
}
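Not a confirmed fix, but one thing worth checking in a setup like this: whether the writer actually opened, and whether the frame size given to the writer matches the frames being written (with many backends a size mismatch means frames are silently dropped, leaving a header-only file). A minimal sketch using the C++ API, where the output filename and the MJPG fourcc are just placeholders:
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::VideoCapture capture(0);
    if (!capture.isOpened()) { std::printf("Could not open camera\n"); return 1; }

    cv::Mat frame;
    capture >> frame;                        // grab one frame to get the real size

    // "out.avi" and MJPG are placeholders, not a known-good combination for this setup.
    cv::VideoWriter writer("out.avi", CV_FOURCC('M','J','P','G'), 30, frame.size());
    if (!writer.isOpened()) { std::printf("VideoWriter failed to open\n"); return 1; }

    while (capture.read(frame)) {
        writer.write(frame);                 // frame size must match the size above
        cv::imshow("Webcam", frame);
        if (cv::waitKey(33) == 27) break;    // Esc to quit
    }
    return 0;
}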

Creating Separate Context for Each GPU while having one display monitor

I want to create one GL context for each GPU on Linux using GLX. According to the NVIDIA slides, it is pretty simple: I just have to pass ":0.0" for the first GPU and ":0.1" for the second one to the XOpenDisplay function. I have tried it, but it only works with ":0.0", not with ":0.1". I have two GPUs: a GTX 980 and a GTX 970. Also, as xorg.conf shows, Xinerama is disabled. Furthermore, I only have one display monitor, and it is connected to the GTX 980.
Do you have any idea how to fix this, or what is missing?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/gl.h>
#include <GL/glx.h>
#define GLX_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define GLX_CONTEXT_MINOR_VERSION_ARB 0x2092
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);
// Helper to check for extension string presence. Adapted from:
// http://www.opengl.org/resources/features/OGLextensions/
static bool isExtensionSupported(const char *extList, const char *extension)
{
const char *start;
const char *where, *terminator;
/* Extension names should not have spaces. */
where = strchr(extension, ' ');
if (where || *extension == '\0')
return false;
/* It takes a bit of care to be fool-proof about parsing the
OpenGL extensions string. Don't be fooled by sub-strings,
etc. */
for (start=extList;;) {
where = strstr(start, extension);
if (!where)
break;
terminator = where + strlen(extension);
if ( where == start || *(where - 1) == ' ' )
if ( *terminator == ' ' || *terminator == '\0' )
return true;
start = terminator;
}
return false;
}
static bool ctxErrorOccurred = false;
static int ctxErrorHandler( Display *dpy, XErrorEvent *ev )
{
ctxErrorOccurred = true;
return 0;
}
int main(int argc, char* argv[])
{
Display *display = XOpenDisplay(":0.1");
if (!display)
{
printf("Failed to open X display\n");
exit(1);
}
// Get a matching FB config
static int visual_attribs[] =
{
GLX_X_RENDERABLE , True,
GLX_DRAWABLE_TYPE , GLX_WINDOW_BIT,
GLX_RENDER_TYPE , GLX_RGBA_BIT,
GLX_X_VISUAL_TYPE , GLX_TRUE_COLOR,
GLX_RED_SIZE , 8,
GLX_GREEN_SIZE , 8,
GLX_BLUE_SIZE , 8,
GLX_ALPHA_SIZE , 8,
GLX_DEPTH_SIZE , 24,
GLX_STENCIL_SIZE , 8,
GLX_DOUBLEBUFFER , True,
//GLX_SAMPLE_BUFFERS , 1,
//GLX_SAMPLES , 4,
None
};
int glx_major, glx_minor;
// FBConfigs were added in GLX version 1.3.
if ( !glXQueryVersion( display, &glx_major, &glx_minor ) ||
( ( glx_major == 1 ) && ( glx_minor < 3 ) ) || ( glx_major < 1 ) )
{
printf("Invalid GLX version");
exit(1);
}
printf( "Getting matching framebuffer configs\n" );
int fbcount;
GLXFBConfig* fbc = glXChooseFBConfig(display, DefaultScreen(display), visual_attribs, &fbcount);
if (!fbc)
{
printf( "Failed to retrieve a framebuffer config\n" );
exit(1);
}
printf( "Found %d matching FB configs.\n", fbcount );
// Pick the FB config/visual with the most samples per pixel
printf( "Getting XVisualInfos\n" );
int best_fbc = -1, worst_fbc = -1, best_num_samp = -1, worst_num_samp = 999;
int i;
for (i=0; i<fbcount; ++i)
{
XVisualInfo *vi = glXGetVisualFromFBConfig( display, fbc[i] );
if ( vi )
{
int samp_buf, samples;
glXGetFBConfigAttrib( display, fbc[i], GLX_SAMPLE_BUFFERS, &samp_buf );
glXGetFBConfigAttrib( display, fbc[i], GLX_SAMPLES , &samples );
printf( " Matching fbconfig %d, visual ID 0x%2x: SAMPLE_BUFFERS = %d,"
" SAMPLES = %d\n",
i, vi -> visualid, samp_buf, samples );
if ( best_fbc < 0 || samp_buf && samples > best_num_samp )
best_fbc = i, best_num_samp = samples;
if ( worst_fbc < 0 || !samp_buf || samples < worst_num_samp )
worst_fbc = i, worst_num_samp = samples;
}
XFree( vi );
}
GLXFBConfig bestFbc = fbc[ best_fbc ];
// Be sure to free the FBConfig list allocated by glXChooseFBConfig()
XFree( fbc );
// Get a visual
XVisualInfo *vi = glXGetVisualFromFBConfig( display, bestFbc );
printf( "Chosen visual ID = 0x%x\n", vi->visualid );
printf( "Creating colormap\n" );
XSetWindowAttributes swa;
Colormap cmap;
swa.colormap = cmap = XCreateColormap( display,
RootWindow( display, vi->screen ),
vi->visual, AllocNone );
swa.background_pixmap = None ;
swa.border_pixel = 0;
swa.event_mask = StructureNotifyMask;
printf( "Creating window\n" );
Window win = XCreateWindow( display, RootWindow( display, vi->screen ),
0, 0, 100, 100, 0, vi->depth, InputOutput,
vi->visual,
CWBorderPixel|CWColormap|CWEventMask, &swa );
if ( !win )
{
printf( "Failed to create window.\n" );
exit(1);
}
// Done with the visual info data
XFree( vi );
XStoreName( display, win, "GL 3.0 Window" );
printf( "Mapping window\n" );
XMapWindow( display, win );
// Get the default screen's GLX extension list
const char *glxExts = glXQueryExtensionsString( display,
DefaultScreen( display ) );
// NOTE: It is not necessary to create or make current to a context before
// calling glXGetProcAddressARB
glXCreateContextAttribsARBProc glXCreateContextAttribsARB = 0;
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc)
glXGetProcAddressARB( (const GLubyte *) "glXCreateContextAttribsARB" );
GLXContext ctx = 0;
// Install an X error handler so the application won't exit if GL 3.0
// context allocation fails.
//
// Note this error handler is global. All display connections in all threads
// of a process use the same error handler, so be sure to guard against other
// threads issuing X commands while this code is running.
ctxErrorOccurred = false;
int (*oldHandler)(Display*, XErrorEvent*) =
XSetErrorHandler(&ctxErrorHandler);
// Check for the GLX_ARB_create_context extension string and the function.
// If either is not present, use GLX 1.3 context creation method.
if ( !isExtensionSupported( glxExts, "GLX_ARB_create_context" ) ||
!glXCreateContextAttribsARB )
{
printf( "glXCreateContextAttribsARB() not found"
" ... using old-style GLX context\n" );
ctx = glXCreateNewContext( display, bestFbc, GLX_RGBA_TYPE, 0, True );
}
// If it does, try to get a GL 3.0 context!
else
{
int context_attribs[] =
{
GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
GLX_CONTEXT_MINOR_VERSION_ARB, 0,
//GLX_CONTEXT_FLAGS_ARB , GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
None
};
printf( "Creating context\n" );
ctx = glXCreateContextAttribsARB( display, bestFbc, 0,
True, context_attribs );
// Sync to ensure any errors generated are processed.
XSync( display, False );
if ( !ctxErrorOccurred && ctx )
printf( "Created GL 3.0 context\n" );
else
{
// Couldn't create GL 3.0 context. Fall back to old-style 2.x context.
// When a context version below 3.0 is requested, implementations will
// return the newest context version compatible with OpenGL versions less
// than version 3.0.
// GLX_CONTEXT_MAJOR_VERSION_ARB = 1
context_attribs[1] = 1;
// GLX_CONTEXT_MINOR_VERSION_ARB = 0
context_attribs[3] = 0;
ctxErrorOccurred = false;
printf( "Failed to create GL 3.0 context"
" ... using old-style GLX context\n" );
ctx = glXCreateContextAttribsARB( display, bestFbc, 0,
True, context_attribs );
}
}
// Sync to ensure any errors generated are processed.
XSync( display, False );
// Restore the original error handler
XSetErrorHandler( oldHandler );
if ( ctxErrorOccurred || !ctx )
{
printf( "Failed to create an OpenGL context\n" );
exit(1);
}
// Verifying that context is a direct context
if ( ! glXIsDirect ( display, ctx ) )
{
printf( "Indirect GLX rendering context obtained\n" );
}
else
{
printf( "Direct GLX rendering context obtained\n" );
}
printf( "Making context current\n" );
glXMakeCurrent( display, win, ctx );
glClearColor( 0, 0.5, 1, 1 );
glClear( GL_COLOR_BUFFER_BIT );
glXSwapBuffers ( display, win );
sleep( 1 );
glClearColor ( 1, 0.5, 0, 1 );
glClear ( GL_COLOR_BUFFER_BIT );
glXSwapBuffers ( display, win );
sleep( 1 );
glXMakeCurrent( display, 0, 0 );
glXDestroyContext( display, ctx );
XDestroyWindow( display, win );
XFreeColormap( display, cmap );
XCloseDisplay( display );
return 0;
}
The reason it works with ":0.0" but not with ":0.1" is that these are X display and screen numbers: ":0.0" means the first screen on the first display, and ":0.1" means the second screen on the first display.
These numbers select which screen the window is displayed on, not which GPU is used. As you only have one monitor attached, you only have one screen, so ":0.1" fails.
I believe the slides expect you to have two or more monitors attached, each driven by a different GPU.
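A quick way to confirm this from code is to ask Xlib how many screens the display actually exposes; ScreenCount() is a standard Xlib macro. A small sketch, separate from the program above:
#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(":0");
    if (!dpy) {
        printf("Failed to open display :0\n");
        return 1;
    }
    /* With a single configured X screen this prints 1, which is why ":0.1" fails. */
    printf("Display :0 exposes %d screen(s)\n", ScreenCount(dpy));
    XCloseDisplay(dpy);
    return 0;
}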

Images and text not showing in SDL under Mac OSX

I managed to compile, bundle and load resources under Xcode 4.3 and SDL 1.2.15.
I know the resources are loading correctly because the file handles are not null and no errors are thrown.
I successfully load PNGs and TTFs, obtain and crop surfaces, and blit them.
But when I flip, the only things I get to see are the lines I drew using SDL_Draw.
I will post some bits of code; I'm trying to keep an engine-ish structure, so the code is anything but together.
Initialization:
void CEngine::Init() {
// Register SDL_Quit to be called at exit; makes sure things are cleaned up when we quit.
atexit( SDL_Quit );
// Initialize SDL's subsystems - in this case, only video.
if ( SDL_Init( SDL_INIT_EVERYTHING ) < 0 ) {
fprintf( stderr, "Unable to init SDL: %s\n", SDL_GetError() );
exit( 1 );
}
// Attempt to create a window with the specified height and width.
SetSize( m_iWidth, m_iHeight );
// If we fail, return error.
if ( m_pScreen == NULL ) {
fprintf( stderr, "Unable to set up video: %s\n", SDL_GetError() );
exit( 1 );
}
AdditionalInit();
}
and
void CTileEngine::AdditionalInit() {
SetTitle( "TileEngine - Loading..." );
PrintDebug("Initializing SDL_Image");
int flags = IMG_INIT_PNG;
int initted = IMG_Init( flags );
if( ( initted & flags ) != flags ) {
PrintDebug("IMG_Init: Failed to init required image support!");
PrintDebug(IMG_GetError());
// handle error
}
PrintDebug("Initializing SDL_TTF");
if( TTF_Init() == -1 ) {
PrintDebug("TTF_Init: Failed to init required ttf support!");
PrintDebug(TTF_GetError());
}
PrintDebug("Loading fonts");
font = TTF_OpenFont( OSXFileManager::GetResourcePath("Roboto-Regular.ttf"), 28 );
if( !font ) {
PrintDebug("Error loading fonts");
PrintDebug(TTF_GetError());
}
g_pGame = new CGame;
LoadGame( OSXFileManager::GetResourcePath( "test", "tmx") );
SetTitle( "TileEngine" );
PrintDebug("Finished AditionalInit()");
}
Main draw method
void CEngine::DoRender(){
++m_iFPSCounter;
if ( m_iFPSTickCounter >= 1000 ) {
m_iCurrentFPS = m_iFPSCounter;
m_iFPSCounter = 0;
m_iFPSTickCounter = 0;
}
SDL_FillRect( m_pScreen, 0, SDL_MapRGB( m_pScreen->format, 0, 0, 0 ) );
// Lock surface if needed
if ( SDL_MUSTLOCK( m_pScreen ) ){
if ( SDL_LockSurface( m_pScreen ) < 0 ){
return;
}
}
Render( GetSurface() );
// Render FPS
SDL_Color fpsColor = { 255, 255, 255 };
string fpsMessage = "FPS: ";
fpsMessage.append( SSTR(m_iCurrentFPS) );
SDL_Surface* fps = TTF_RenderText_Solid(font, fpsMessage.c_str(), fpsColor);
if( fps ) {
SDL_Rect destRect;
destRect.x = pDestSurface->w - fps->w;
destRect.y = pDestSurface->h - fps->h;
destRect.w = fps->w;
destRect.h = fps->h;
SDL_BlitSurface(fps, &fps->clip_rect, pDestSurface, &destRect);
SDL_FreeSurface(fps);
}
// Unlock if needed
if ( SDL_MUSTLOCK( m_pScreen ) )
SDL_UnlockSurface( m_pScreen );
// Tell SDL to update the whole gScreen
SDL_Flip( m_pScreen );
}
Image file loading
bool CEntity::VLoadImageFromFile( const string& sFile) {
if ( m_pSurface != 0 ){
SDL_FreeSurface( m_pSurface );
}
string nFile = string(OSXFileManager::APPNAME) + OSXFileManager::RESOURCEDIR + sFile;
SDL_Surface *pTempSurface;
pTempSurface = IMG_Load( nFile.c_str() );
m_sImage = sFile;
if ( pTempSurface == 0 ){
char czError[256];
sprintf( czError, "Image '%s' could not be opened. Reason: %s", nFile.c_str(), IMG_GetError() );
fprintf( stderr, "\nERROR: %s", czError );
return false;
} else {
pTempSurface = SDL_DisplayFormatAlpha(pTempSurface);
}
m_pSurface = pTempSurface;
return true;
}
Entity draw method
void CEntity::VRender( SDL_Surface *pDestSurface ) {
if ( ( m_pSurface == 0 ) || ( m_bVisible == false) || ( m_iAlpha == 0 ) ){
// If the surface is invalid or it's 100% transparent.
return;
}
SDL_Rect SDestRect;
SDestRect.x = m_iPosX;
SDestRect.y = m_iPosY;
SDestRect.w = m_pSurface->w;
SDestRect.h = m_pSurface->h;
if ( m_iAlpha != 255 )
SDL_SetAlpha( m_pSurface, SDL_SRCALPHA, m_iAlpha );
SDL_BlitSurface( m_pSurface, &m_pSurface->clip_rect, pDestSurface, &SDestRect );
}
I have checked and debugged this a million times and I don't see what's wrong here. As I said before, file loading seems to be OK.
But this part
void CTile::RenderGrid( SDL_Surface* pDestSurface ) {
Uint32 m_GridColor = SDL_MapRGB( pDestSurface->format, 0xFF, 0xFF, 0xFF );
Draw_Rect(pDestSurface, GetPosX(), GetPosY(), GetWidth(), GetHeight(), m_GridColor);
}
works like a charm.
I found out what was happening. It turns out that, since SDL version 1.1.8, SDL_LockSurface calls are recursive, so each lock must be paired with an unlock. That was not the case last time I used SDL, so I was not aware of it. Simply matching locks and unlocks did the job.
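In code, the pairing described above looks roughly like this (a sketch of the pattern against the m_pScreen surface from the question, not the actual fix as committed):
if ( SDL_MUSTLOCK( m_pScreen ) ) {
    if ( SDL_LockSurface( m_pScreen ) < 0 )
        return;
}

// ... draw / blit to m_pScreen ...

// Locks are recursive, so every successful SDL_LockSurface() must be matched
// by exactly one SDL_UnlockSurface(); the surface is only really unlocked
// once the lock count drops back to zero.
if ( SDL_MUSTLOCK( m_pScreen ) )
    SDL_UnlockSurface( m_pScreen );

SDL_Flip( m_pScreen );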

CL/GL interop: clCreateFromGLTexture2D with GLIntercept fails on AMD Fusion

I have a CL/GL interop executable that makes a call to clCreateFromGLTexture2D. It fails on that call:
clCreateFromGLTexture2D( 0x06691828, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, 1, CL_INVALID_GL_OBJECT ) = 0x00000000
I am using GLIntercept (linked here), so the OpenGL32.dll I am loading is the one generated by GLIntercept.
It works on an Nvidia GTX and runs fine standalone, without interception, on the AMD Fusion machine with its AMD Radeon GPU. However, it fails when using the open-source GLIntercept.
The CL/GL interop test code is posted there if you are interested in downloading it and reproducing the problem.
Does anyone have an idea how to fix it?
I have created an open issue/ticket on the GLIntercept project website if you are interested in downloading the sample CL/GL interop test code I am using for this.
Here are some statements from my debug log, in case they help:
glutInit( ) =
glutInitDisplayMode( 12 )
glutInitWindowSize( 320, 258 )
wglChoosePixelFormat( 38010929, 001EF950 PIXELFORMATDESCRIPTOR { nSize 28 nVersion 1 dwFlags 25 PFD_DOUBLEBUFFER PFD_DRAW_TO_WINDOW PFD_SUPPORT_OPENGL iPixelType PFD_TYPE_RGBA cColorBits cRedBits cRedShift cGreenBits cGreenShift cBlueBits cBlueShift cAlphaBits cAlphaShift cAccumBits cAccumRedBits cAccumGreenBits cAccumBlueBits cAccumAlphaBits cDepthBits cStencilBits } 28 ) = 0x2
wglGetCurrentContext( ) = 0x00000000
wglGetCurrentDC( ) = 0x00000000
glutCreateWindow( OpenGL-CL interraction! ) = 0x1
glClearColor( 0, 0, 0, 0 )
glEnable( b71 )
glEnable( de1 )
glGenTextures( 1, 0125B194 { 1} )
glBindTexture( de1, 1 )
glTexEnvi( 2300, 2200, 1e01 )
glTexParameteri( de1, 2801, 2600 )
glTexParameteri( de1, 2800, 2600 )
glTexImage2D( de1, 0, 8058, 100, 100, 0, 1908, 1401, 00C2E858 )
glBindTexture( de1, 0 )
clGetPlatformIDs( 0, NULL, 1 ) = CL_SUCCESS
clGetPlatformIDs( 1, 05744514, NULL ) = CL_SUCCESS
clGetDeviceIDs( 05744514, CL_DEVICE_TYPE_GPU, 1, 04516F40 , NULL ) = CL_SUCCESS
clGetDeviceInfo( 0x04516F40, CL_DEVICE_NAME, 400, BeaverCreek, NULL ) = CL_SUCCESS
clGetDeviceInfo( 0x04516F40, CL_DEVICE_EXTENSIONS, 400, cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_atomic_counters_32 cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_popcnt cl_khr_d3d10_sharing , NULL ) = CL_SUCCESS
wglGetCurrentContext( ) = 0x00020000
wglGetCurrentDC( ) = 0x38010929
wglGetCurrentContext( ) = 0x00020000
clCreateContext( 8200 0x20000 8203 0x38010929 4228 0x5744514, 1, 04516F40 , NULL, NULL, CL_SUCCESS ) = 0x06A30828
clCreateCommandQueue( 0x06A30828, 0x04516F40, 0, CL_SUCCESS ) = 0x06A69900
clCreateProgramWithSource( 0x06A30828, 1, C:\Users\inteltc\Documents\clgl_latest\Debug\clgl_1.program, CL_SUCCESS ) = 0x06A6B9F8
clBuildProgram( 0x06A6B9F8, 0, NULL, NULL, NULL, NULL ) = CL_SUCCESS
clCreateKernel( 0x06A6B9F8, kernel1, CL_SUCCESS ) = 0x045266E0
clCreateFromGLTexture2D( 0x06A30828, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, , 0, 1, CL_INVALID_GL_OBJECT ) = 0x00000000
I don't see that you called glTexImage2D for your texture object, and that basically leaves the texture width, height and mipmaps undefined, so it's pretty obvious why it fails.
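For reference, the ordering being described looks roughly like this (a sketch; it assumes an existing GL context plus a cl_context named context, and the size, format and write-only flag are just illustrative):
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* Define storage for mip level 0 before registering the texture with OpenCL;
   if the requested mip level has no defined width/height, clCreateFromGLTexture2D
   returns CL_INVALID_GL_OBJECT. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 100, 100, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);

cl_int err = CL_SUCCESS;
cl_mem cl_tex = clCreateFromGLTexture2D(context, CL_MEM_WRITE_ONLY,
                                        GL_TEXTURE_2D, 0 /* mip level */, tex, &err);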