OpenGL glutInit() : XOpenDisplay() causing segmentation fault - c++

I'm carrying out a project for virtualization of the CUDA API. The project is based on the QEMU hypervisor; I'm using the latest version, 2.6.0-rc3. I have completed the core module, and this question is about demoing it. QEMU 2.6.0-rc3 has OpenGL support.
I ran the following program on the VM to test OpenGL support and it executed without any issue.
#include <GL/glew.h>     // added: needed for glewInit(); GLEW's header must come before other GL headers
#include <GL/freeglut.h>
#include <GL/gl.h>

void renderFunction()
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
    glBegin(GL_POLYGON);
    glVertex2f(-0.5, -0.5);
    glVertex2f(-0.5, 0.5);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.5, -0.5);
    glEnd();
    glFlush();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("OpenGL - First window demo");
    glutDisplayFunc(renderFunction);
    glewInit();
    glutMainLoop();
    return 0;
}
I also used the NVIDIA graphics sample named "simpleGL", available with the CUDA 6.5 toolkit at https://developer.nvidia.com/cuda-toolkit-65. The demo uses OpenGL to depict a waveform and CUDA for the underlying calculations that simulate it. When I run this demo program, a segmentation fault occurs at the glutInit() call. Here's the related code segment from the demo.
bool initGL(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE);
    glutInitWindowSize(window_width, window_height);
    glutCreateWindow("Cuda GL Interop (VBO)");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    glutMotionFunc(motion);
    glutTimerFunc(REFRESH_DELAY, timerEvent, 0);

    // initialize necessary OpenGL extensions
    glewInit();
    if (!glewIsSupported("GL_VERSION_2_0 "))
    {
        fprintf(stderr, "ERROR: Support for necessary OpenGL extensions missing.");
        fflush(stderr);
        return false;
    }

    // default initialization
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glDisable(GL_DEPTH_TEST);

    // viewport
    glViewport(0, 0, window_width, window_height);

    // projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (GLfloat)window_width / (GLfloat)window_height, 0.1, 10.0);

    SDK_CHECK_ERROR_GL();

    return true;
}
Here's the gdb call stack.
#0 0x00007ffff57d2872 in XOpenDisplay ()
from /usr/lib/x86_64-linux-gnu/libX11.so.6
#1 0x00007ffff76af2a3 in glutInit ()
from /usr/lib/x86_64-linux-gnu/libglut.so.3
#2 0x000000000040394d in initGL(int, char**) ()
#3 0x0000000000403b6a in runTest(int, char**, char*) ()
#4 0x00000000004037dc in main ()
According to my research, the segmentation fault occurs when an attempt to open a window is made. My knowledge of the internal workings of OpenGL is very limited, so some help in this regard is much appreciated. Thanks.

I'm carrying out a project for virtualization of CUDA API
Without support from NVidia I doubt you can do this on your own.
You're doing a few things that clash in a crass way:
First off, you're running everything in a QEmu environment, which means that unless you pass the GPU through into the VM via IOMMU virtualization, there's nothing a CUDA runtime inside the VM could work with. CUDA is designed to talk directly to the GPU.
Next, you're using the Mesa OpenGL implementation inside the VM. Mesa has a dedicated backend to pass OpenGL commands through QEmu to an OpenGL implementation "outside" of it. This is more or less a remote procedure call, and it piggybacks on the very same code paths that also implement indirect GLX via X11 transport.
CUDA internally links against libGL.so, but the libGL.so it expects to see is the one from the NVidia drivers, not some arbitrary libGL.so: libcuda.so and libGL.so ship as part of the same driver package, namely the NVidia drivers. The corresponding libcuda.so has certain internal "knowledge" about that particular libGL.so and tries to use it. Without the right libGL.so it won't work.
If you want to use CUDA in a VM (perfectly possible), you have to pass the whole GPU through into the VM. You can do this by loading the pci_stub kernel module, configuring the NVidia GPU as a device to be attached to the stub, and then launching the QEmu VM with passthrough of the GPU device (it should actually also be possible to hot-plug-passthrough it, but I never tried that). For this to work, the nvidia kernel module must not have taken ownership of the GPU. So in case you have multiple NVidia GPUs and want to pass through only a subset of them, you have to attach those to pci_stub before loading the nvidia kernel module. Then, inside the VM, you can use the NVidia drivers as usual.

Related

GLFW load functions

From the OpenGL wiki:
There are two phases of OpenGL initialization. The first phase is the
creation of an OpenGL context; the second phase is to load all of the
necessary functions to use OpenGL.
This boilerplate work is done with various OpenGL loading libraries;
So I downloaded GLFW and compiled the demo tests in the library. But I found that the framework merges window creation and context creation into one function call, glfwCreateWindow, which first creates a window and a context and then loads a few extension functions via initWGLExtensions.
So the context is set up now, without loading any other GL functions. The simple demo then starts a message loop to draw.
// Includes and callbacks below are assumed; the original GLFW "simple" demo defines them.
#include <GLFW/glfw3.h>
#include <stdio.h>
#include <stdlib.h>

static void error_callback(int error, const char* description)
{
    fputs(description, stderr);
}

static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
}

int main(void)
{
    GLFWwindow* window;
    glfwSetErrorCallback(error_callback);
    if (!glfwInit())
        exit(EXIT_FAILURE);
    window = glfwCreateWindow(640, 480, "Simple example", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);
    glfwSetKeyCallback(window, key_callback);
    while (!glfwWindowShouldClose(window))
    {
        float ratio;
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        ratio = width / (float) height;
        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef((float) glfwGetTime() * 50.f, 0.f, 0.f, 1.f);
        glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f);
        glVertex3f(-0.6f, -0.4f, 0.f);
        glColor3f(0.f, 1.f, 0.f);
        glVertex3f(0.6f, -0.4f, 0.f);
        glColor3f(0.f, 0.f, 1.f);
        glVertex3f(0.f, 0.6f, 0.f);
        glEnd();
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
So do all the rendering command functions come on the fly? They are actually declared in GL.h. Since the framework didn't load these functions from the drivers, where do these functions reside? [Question]
And all the functions loaded by GLFW look like this:
src\wgl_context.h(39):typedef PROC (WINAPI * WGLGETPROCADDRESS_T)(LPCSTR);
src\wgl_context.h(44):#define _glfw_wglGetProcAddress _glfw.wgl.opengl32.GetProcAddress
src\wgl_context.h(89): WGLGETPROCADDRESS_T GetProcAddress;
src\wgl_context.c(42): _glfw_wglGetProcAddress("wglGetExtensionsStringEXT");
src\wgl_context.c(44): _glfw_wglGetProcAddress("wglGetExtensionsStringARB");
src\wgl_context.c(48): _glfw_wglGetProcAddress("wglCreateContextAttribsARB");
src\wgl_context.c(52): _glfw_wglGetProcAddress("wglSwapIntervalEXT");
src\wgl_context.c(56): _glfw_wglGetProcAddress("wglGetPixelFormatAttribivARB");
src\wgl_context.c(289): _glfw.wgl.opengl32.GetProcAddress = (WGLGETPROCADDRESS_T)
src\wgl_context.c(290): GetProcAddress(_glfw.wgl.opengl32.instance, "wglGetProcAddress");
src\wgl_context.c(659): const GLFWglproc proc = (GLFWglproc) _glfw_wglGetProcAddress(procname);
Does this mean that I don't need to load any other GL functions? [Question]
I'm just a bit confused about the GL workflow.
UPDATE
I found out that those gl function calls are linked against opengl32.lib. What does this mean? That I'm using the default 1.1 GL implementation shipped with Windows 10? So do I really not need to import these functions from an actual driver, say nvoglv32.dll, but can instead use the statically linked ones from opengl32.lib?
That wording in the OpenGL wiki is a little bit unlucky. The details are a little bit more complicated. There are three things to an OpenGL environment:
the operating system ABI (application binary interface) contract
the OpenGL context
the window system integration
Historically the way OpenGL integrates with the OS is a crude hack, with a rather poorly designed interface: since OpenGL is an API designed to talk to the graphics driver, it's not some kind of 3rd party library you could install. A certain set of its interfaces must be provided by the operating system. Exactly which interfaces these are is written down in the ABI contract. Of course each OS has its own contract, and it may even change between versions.
To support newer versions of OpenGL, the so-called "extension mechanism" is defined, through which functions outside of the ABI contract can be loaded. Functions that are part of the ABI contract may, or may not, be available through this mechanism as well, so don't rely on that assumption.
In Windows (from Win-NT-4 and Win-95B onwards) the ABI contract assures that a program will always find a conforming OpenGL-1.1 implementation. For the sake of simplicity, exactly the OpenGL-1.1 entry points are directly exposed by the interface stub library (opengl32.dll, with the symbol table being available through opengl32.lib), nothing less, nothing more. Device drivers then attach to that stub library with their end of the OpenGL implementation, which talks to the hardware. For all OpenGL contexts the OpenGL-1.1 stubs are invariant, i.e. they're the same for all contexts. Extended functionality, OTOH, is specific to each context. So for each OpenGL context that is created, the extension function pointers have to be loaded individually and properly matched to the active context upon calling. It also means that you first have to create a context (and make it current) before you can even attempt to load its extension functions.
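To illustrate that per-context loading on Windows, here is a minimal sketch (not taken from the question's code): the PFNGLCREATESHADERPROC typedef normally comes from <GL/glext.h> and is spelled out only to keep the sketch self-contained, and loadExtensions is a hypothetical helper name.
#include <windows.h>
#include <GL/gl.h>

// Normally provided by <GL/glext.h>; repeated here so the sketch is self-contained.
typedef GLuint (APIENTRY *PFNGLCREATESHADERPROC)(GLenum type);

PFNGLCREATESHADERPROC my_glCreateShader = NULL;

// Must be called AFTER a context has been created and made current with
// wglMakeCurrent(), because wglGetProcAddress returns context-specific pointers.
void loadExtensions(void)
{
    my_glCreateShader =
        (PFNGLCREATESHADERPROC) wglGetProcAddress("glCreateShader");

    if (my_glCreateShader == NULL)
    {
        // The active context does not expose this entry point,
        // e.g. because it is a plain OpenGL-1.1 context.
    }
}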
In X11/GLX environments (e.g. Linux, the *BSDs, Solaris) the situation is as follows: the ABI contract specifies that if OpenGL is available, then at least the functions of OpenGL-1.2 must be exported by the OpenGL implementation's shared object. This is of particular note! While on Windows there's a vendor-neutral stub, on X11/GLX the libGL.so your program dynamically loads is the implementation. The libGL.so may also export far more symbols, potentially covering all the supported OpenGL features; as a programmer you should not rely on this, though. Also, in GLX it's asserted that all entry points are context invariant, i.e. you can load them once and reuse them for all contexts.
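A correspondingly hedged sketch for GLX; since the pointers are context invariant there, a single lookup (for example at program start) can be shared by every context. Again, the typedef and the helper name loadExtensionsOnce are only illustrative.
#include <GL/gl.h>
#include <GL/glx.h>   // declares glXGetProcAddress (GLX 1.4)

// Normally provided by <GL/glext.h>; repeated here to keep the sketch self-contained.
typedef GLuint (*PFNGLCREATESHADERPROC)(GLenum type);

PFNGLCREATESHADERPROC my_glCreateShader = NULL;

void loadExtensionsOnce(void)
{
    // Unlike wglGetProcAddress, the returned pointer is valid for all contexts,
    // so one lookup is enough.
    my_glCreateShader =
        (PFNGLCREATESHADERPROC) glXGetProcAddress((const GLubyte *)"glCreateShader");
}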
On MacOS X you get OpenGL through a framework. The ABI contract is per OS version, so the OpenGL functions available to a program are determined entirely by the specific OS version. There is an extension mechanism, but it hardly serves any purpose; only a few Apple-specific extensions are exported, so don't even bother with it on Apple's OS.

nsight - OpenGL 4.2 debugging incompatibility

Whenever I attempt to debug a shader in NVIDIA Nsight I get the following incompatibilities in my nvcompatlog:
glDisable (cap = 0x00008620)
glMatrixMode
glPushMatrix
glLoadIdentity
glOrtho
glBegin
glColor4f
glVertex2f
glEnd
glPopMatrix
This is confusing since I am using a 4.2 core profile and not using any deprecated or fixed-function calls. At this stage I am just drawing a simple 2D square to the screen and can assure you that none of the functions listed above is being used.
My real concern is that, being new to SDL & GLEW, I am not sure what functions they are using behind the scenes. I have been searching around the web and have found others who are using SDL, GLEW, & NVIDIA Nsight. This leads me to believe I am overlooking something. Below is a shortened version of how I am initializing SDL & GLEW.
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_Window *_window;
_window = SDL_CreateWindow("Red Square", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED , 200, 200, SDL_WINDOW_OPENGL);
SDL_GLContext glContext = SDL_GL_CreateContext(_window);
glewExperimental = GL_TRUE;
GLenum status = glewInit();
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
In the implementation I have error checking pretty much after every call. I excluded it from the example to reduce the amount of clutter. All the above produce no errors and return valid objects.
After the initialization, glGetString(GL_VERSION) returns 4.2.0 NVIDIA 344.75, glewGetString(GLEW_VERSION) returns 1.11.0, and GLEW_VERSION_4_2 returns true.
Any idea how I can use SDL & GLEW and not have either of these frameworks call deprecated functions?
Edit
I have been experimenting with Dependency Walker here. Looking at the calls that go through Opengl32.dll, none of the functions listed above shows up as being called.
For anyone interested: Nsight captures all commands issued to the OpenGL server, not just those issued through your application. If you have any FPS-overlay or recording software enabled, these tend to use deprecated methods for drawing to the framebuffer. In my case it was Riva Tuner, which displays the FPS on screen for any running games. Disabling it resolved my issue.

opengl why is my rectangle so big and tilted?

I have an OpenGL project where I'm just trying to draw a red rectangle to the screen. The problem is that 1) it's huge, it takes up almost the entire screen, and 2) it's tilted. I'm really new to OpenGL, so I don't understand the coordinate system or what a few functions do, such as glOrtho().
Here's the code:
#include <GL/glut.h>   // added: GLUT header needed for the glut* calls (freeglut's <GL/freeglut.h> also works)

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_QUADS);
    glColor3f(1, 0, 0); // NOT SURE WHERE THIS STARTS, AND HOW THE COORDINATES WORK
    glVertex2f(-1.0f, 1.0f);
    glVertex2f( 1.0f, 1.0f);
    glVertex2f( 1.0f,-1.0f);
    glVertex2f(-1.0f,-1.0f);
    glEnd();
    glFlush();
}

void init()
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 10.0, 0, 10.0, -1.0, 1.0); //What does this do and how does it's coordinates work?
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(30.0, 1.0, 1.0, 1.0);
    glEnable(GL_DEPTH_TEST);
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitWindowSize(600, 600);
    glutInitWindowPosition(250, 250);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE | GLUT_DEPTH);
    glutCreateWindow("Model View");
    glutDisplayFunc(display);
    init();
    glutMainLoop();
    return 0;
}
Anyway, I'd prefer to make this into a learning experience, so please explain and link to things that would help! Thanks.
Procedure display is responsible for the actual drawing.
void display()
{
This line clears the buffer; the buffer is the memory area where the image is rendered, essentially a 600x600 matrix. To clear means to set every cell of the matrix to the same value. Every cell is a pixel and contains a color and a depth. With this call you are telling OpenGL to paint everything opaque black and to reset the depth to 1. Why opaque black? Because of your call to glClearColor: the first three parameters are the red, green and blue components, and they can range between 0 and 1; 0,0,0 means black. For the last component you specified 1, which means opaque; 0 would be transparent. This last component is called alpha and is used when alpha blending is enabled. Why is the clear depth 1? Because 1 is the default, and you didn't call glClearDepth to override that value.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
This is telling OpenGL that you want to draw quadrilaterals.
glBegin(GL_QUADS);
You want this quadrilateral to be red (remember that the first component of a color is red).
glColor3f(1, 0, 0); // NOT SURE WHERE THIS STARTS, AND HOW THE COORDINATES WORK
Now you list the vertices of the quadrilateral (there is only one, with four vertices); all vertices will be red because you never update the color by calling glColor3f again; you can associate a different color with every vertex, and the final result is usually very cute if you pick red (1,0,0), green (0,1,0), blue (0,0,1) and white (1,1,1). This quadrilateral should appear on screen as a square, because it is geometrically a square, your window is square, and the camera (defined with glOrtho) has a square aspect ratio (first four parameters of the call to glOrtho). If you didn't call glOrtho you would probably see only red, because the default OpenGL coordinates range between -1 and 1, so you would be covering the entire window.
glVertex2f(-1.0f, 1.0f);
glVertex2f( 1.0f, 1.0f);
glVertex2f( 1.0f,-1.0f);
glVertex2f(-1.0f,-1.0f);
This means that you are done with drawing.
glEnd();
Technically, it may be that OpenGL didn't send any of the commands you specified to the graphics card; commands may be enqueued for efficiency reasons. Calling glFlush forces the commands to be sent to the graphics card.
glFlush();
}
You wrote this function, init, to set up some of the OpenGL state that you felt would remain stable across the application. In reality, a real application like a game would have most of this stuff under display; for instance, a game must continuously update the camera as the player moves.
void init()
{
Here as we said before you are setting the clear color to be opaque black.
glClearColor(0.0, 0.0, 0.0, 1.0);
Here you are saying that the camera is not of a perspective type; basically, things that are far away don't get smaller. It is similar to the view that an artificial satellite has of a city. In particular you are creating a camera which is not "centered" on the field of view: I recommend using a call like glOrtho(-10.0, 10.0, -10.0, 10.0, -1.0, 1.0) for your first experiments. For a non-perspective camera, the coordinates that you specify here override the -1 to +1 convention we mentioned above. Try to adjust the parameters so that your red square appears small and centered.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 10.0, 0, 10.0, -1.0, 1.0); //What does this do and how does it's coordinates work?
Here you are basically positioning the camera relative to the square, or the square relative to the camera; there are infinite ways to see it. You are defining a geometrical transformation, and the reason why it is called MODELVIEW is that it is not uniquely something that alters the model (the square) or the view (the camera) but both, depending on the way you see it. However, your square appears rotated because you are calling glRotatef; remove it and the square should appear like a square.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(30.0, 1.0, 1.0, 1.0);
Depth testing is a technique that uses the depth stored in the buffer to remove hidden surfaces, for instance the back faces of a cube in a 3D scene. Your scene is 2D and you are drawing only one quad, so this really doesn't affect your drawing.
glEnable(GL_DEPTH_TEST);
}
In the main you are interacting with glut, an optional subsystem which is not part of OpenGL but is useful to carry out some boring and tedious operations that only the operating system is authorized to perform.
int main(int argc, char *argv[])
{
First you must init glut.
glutInit(&argc, argv);
Then you define the window that will contain your rendering image.
glutInitWindowSize(600, 600);
glutInitWindowPosition(250, 250);
GLUT_RGB means that your window only supports red, green and blue, and doesn't have an alpha channel (this is very often the case). GLUT_DEPTH means that your buffer will be able to store the depth of each pixel. GLUT_SINGLE means that the window is single buffered, that is your commands will directly draw on the window; another option is double buffering, where you actually draw on a back buffer, and then you swap front and back buffer so that the rendered image appears suddenly, and not in a progressive fashion. Your scene is so simple that you shouldn't notice any difference between GLUT_SINGLE and GLUT_DOUBLE.
glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE | GLUT_DEPTH);
Then you actually create the window.
glutCreateWindow("Model View");
You tell glut which function should be called to render the scene.
glutDisplayFunc(display);
Here you call your init function.
init();
This is a window loop, provided by glut. Most windowing systems require a software loop in order to keep the window alive and able to respond to clicks, drags, resizes and keyboard strokes.
glutMainLoop();
return 0;
}
Long story short, several versions of OpenGL are available, and they can be programmed using several languages, and targeted to several platforms. The single most important difference between these versions is that some use a fixed function pipeline (FFP) whereas the newest versions have a programmable pipeline. Your program uses a fixed function pipeline. You should switch to a programmable pipeline whenever you can, because it is the modern way of doing computer graphics, and is much more flexible, even though it requires a little more programming, as the name suggests.
You should ignore the tutorials that I linked originally, I didn't immediately realize how outdated they were. You should go with the one recommended by datenwolf or, if you are interested in mobile development, you could consider learning OpenGL ES 2 (the 2 is important, because the previous version was fixed function). There is also a variant of OpenGL ES 2 for HTML5 and JavaScript, called WebGL. You find the tutorials here, together with a ZIP file containing all the examples; I use their codebase whenever I need to check whether I understood a new concept.
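In the meantime, to fix the immediate symptoms in the fixed-function code above, a minimal sketch of init() and display() with a centered orthographic volume and without the glRotatef call could look like this (the 10-unit extents are an arbitrary choice for illustration):
void init()
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Centered orthographic volume: x and y both range from -10 to +10,
    // so a quad spanning -1..+1 covers only a small, centered part of the window.
    glOrtho(-10.0, 10.0, -10.0, 10.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();            // no glRotatef here, so the square is not tilted
    glEnable(GL_DEPTH_TEST);
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_QUADS);
    glColor3f(1, 0, 0);          // red
    glVertex2f(-1.0f,  1.0f);
    glVertex2f( 1.0f,  1.0f);
    glVertex2f( 1.0f, -1.0f);
    glVertex2f(-1.0f, -1.0f);
    glEnd();
    glFlush();
}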
Cause you are looking at it funny :)
you have created a red square from (-1,-1) to (1,1) // display()
then said that the camera will look at it using an orthographic projection // glOrtho
(glOrtho creates a projection matrix that defines the box-shaped volume of space the camera can see)
and it is a little tilted because you call glRotatef on the MODELVIEW matrix
PS: You have to think of the gl commands as messages sent over to the GL subsystem; those messages alter its various states: what's in the scene, where the camera is, where the lights are, etc.

gwen + opengl can't see anything

I'm trying to use GWEN to draw some GUI elements on top of my OpenGL scene. It seems to be set up correctly, but nothing from GWEN is actually being drawn (visibly, at least). I'm using a custom renderer which is essentially GWEN's stock OpenGL renderer, but with a different function for loading textures, and with OpenGL::Begin() and OpenGL::End() replaced with these:
void coRenderer::Begin()
{
    glUseProgram(0);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(0);
    glEnable(GL_BLEND);
    glMatrixMode(GL_PROJECTION);   // Select the projection matrix
    glPushMatrix();                // Store the projection matrix
    glLoadIdentity();
    glOrtho(0, screen->w, screen->h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glActiveTexture(GL_TEXTURE0);
}

void coRenderer::End()
{
    Flush();
    glMatrixMode(GL_PROJECTION);   // Select the projection matrix
    glPopMatrix();                 // Restore the old projection matrix
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(1);
    glEnable(GL_TEXTURE_2D);
}
the code for gwen's opengl renderer is here:
http://gwen.googlecode.com/svn/trunk/trunk/gwen/Renderers/OpenGL/OpenGL.cpp
BTW I'm using OpenGL 2.1 not 3.0+
Ah GWEN. That frustrating GUI library.
When I started using it and integrating it into the engine we wrote in school, I had the same issue as you, albeit with the stock OpenGL renderer. It turned out the GUI was being positioned wrong; calling glLoadIdentity() to reset the modelview matrix to the identity seemed to resolve it.
The issue you are having could well turn out to be the same as mine, or there could be a problem with your custom OpenGL renderer. I'm not sure how much you know about GWEN or how it works, but it runs on a single texture that skins the GUI. Are you loading that in? Perhaps your texture loader isn't loading it correctly.
Try using your Debugger and stepping through your program. Areas of interest would be where you're attempting to load the GUI skin, where you're assigning the screen space that GWEN can use, and when you're actually attempting to render the GUI.
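If positioning turns out to be the culprit here as well, a minimal sketch (under the assumption that the rest of the custom renderer stays unchanged) of applying that glLoadIdentity() reset to the Begin() shown above:
void coRenderer::Begin()
{
    glUseProgram(0);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(0);
    glEnable(GL_BLEND);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, screen->w, screen->h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();      // preserve whatever modelview matrix the 3D scene left behind
    glLoadIdentity();    // reset it so GWEN renders in untransformed screen space
    glActiveTexture(GL_TEXTURE0);
}
End() would then also pop the modelview matrix it pushed here, mirroring what it already does for the projection matrix.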

SDL with OpenGL (freeglut) crashes on call to glutBitmapCharacter

I have a program using OpenGL through freeglut under SDL. The SDL/OpenGL initialization is as follows:
// Initialize SDL
SDL_Init(SDL_INIT_VIDEO);
// Create the SDL window
SDL_SetVideoMode(SCREEN_W, SCREEN_H, SCREEN_DEPTH, SDL_OPENGL);
// Initialize OpenGL
glClearColor(BG_COLOR_R, BG_COLOR_G, BG_COLOR_B, 1.f);
glViewport(0, 0, SCREEN_W, SCREEN_H);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, SCREEN_W, SCREEN_H, 0.0f, -1.0f, 1.0f);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I've been using glBegin() ... glEnd() blocks without any trouble to draw primitives. However, in this program when I call any glutBitmapX function, the program simply exits without an error status. The code I'm using to draw text is:
glColor3f(1.f, 1.f, 1.f);
glRasterPos2f(x, y);
glutStrokeString(GLUT_BITMAP_8_BY_13, (const unsigned char*)"test string");
In previous similar programs I've used glutBitmapCharacter and glutStrokeString to draw text and it seemed to work. The only difference is that I'm now using freeglut with SDL instead of just GLUT as I did in previous programs. Is there some fundamental problem with my setup that I'm not seeing, or is there a better way of drawing text?
From the GLUT documentation, Section 2, Initialization:
Routines beginning with the glutInit- prefix are used to initialize
GLUT state. The primary initialization routine is glutInit that should
only be called exactly once in a GLUT program. No non-glutInit-
prefixed GLUT or OpenGL routines should be called before glutInit.
The other glutInit- routines may be called before glutInit. The reason
is these routines can be used to set default window initialization
state that might be modified by the command processing done in
glutInit. For example, glutInitWindowSize(400, 400) can be called
before glutInit to indicate 400 by 400 is the program's default window
size. Setting the initial window size or position before glutInit
allows the GLUT program user to specify the initial size or position
using command line arguments.
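The quoted passage implies that the crash comes from calling a GLUT routine without a prior glutInit(). If you do keep freeglut around purely for its text helpers despite the advice below, a minimal, hedged sketch of that fix (window size, clear color and raster position chosen arbitrarily) could look like this:
#include <SDL/SDL.h>        // SDL 1.2, as used in the question
#include <GL/freeglut.h>

int main(int argc, char **argv)
{
    // GLUT keeps global state; it must be initialized before any other GLUT
    // routine is used, even if SDL creates the window and the GL context.
    glutInit(&argc, argv);

    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);   // size/depth chosen arbitrarily

    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);

    glColor3f(1.f, 1.f, 1.f);
    glRasterPos2f(0.f, 0.f);
    // glutBitmapString is freeglut's bitmap-font counterpart of glutStrokeString;
    // it is the routine that actually takes GLUT_BITMAP_8_BY_13.
    glutBitmapString(GLUT_BITMAP_8_BY_13, (const unsigned char *)"test string");

    SDL_GL_SwapBuffers();
    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}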
Don't try to mix-n-match GLUT and SDL. It will end in tears and/or non-functioning event loops. Pick one framework and stick with it.
You have likely corrupted the heap.