GLFW load functions - opengl

From the OpenGL wiki:
There are two phases of OpenGL initialization. The first phase is the
creation of an OpenGL context; the second phase is to load all of the
necessary functions to use OpenGL.
This boilerplate work is done with various OpenGL loading libraries;
So I downloaded GLFW and compiled the demo tests in the library, but found out that the framework merges window creation and context creation into one function call, glfwCreateWindow, which first creates a window and a context and then loads a few extension functions via initWGLExtensions.
So the context is set up now, without loading any other GL functions. The simple demo then starts a message loop to draw:
#include <GLFW/glfw3.h>
#include <stdlib.h>
#include <stdio.h>

static void error_callback(int error, const char* description)
{
    fputs(description, stderr);
}

static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
}

int main(void)
{
    GLFWwindow* window;
    glfwSetErrorCallback(error_callback);
    if (!glfwInit())
        exit(EXIT_FAILURE);
    window = glfwCreateWindow(640, 480, "Simple example", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);
    glfwSetKeyCallback(window, key_callback);
    while (!glfwWindowShouldClose(window))
    {
        float ratio;
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        ratio = width / (float) height;
        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef((float) glfwGetTime() * 50.f, 0.f, 0.f, 1.f);
        glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f);
        glVertex3f(-0.6f, -0.4f, 0.f);
        glColor3f(0.f, 1.f, 0.f);
        glVertex3f(0.6f, -0.4f, 0.f);
        glColor3f(0.f, 0.f, 1.f);
        glVertex3f(0.f, 0.6f, 0.f);
        glEnd();
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
So do all the rendering command functions just come along for free? They are actually declared in GL.h. Since the framework didn't load these functions from the driver, where do these functions reside? [Question]
And all the functions GLFW loads are along these lines:
src\wgl_context.h(39):typedef PROC (WINAPI * WGLGETPROCADDRESS_T)(LPCSTR);
src\wgl_context.h(44):#define _glfw_wglGetProcAddress _glfw.wgl.opengl32.GetProcAddress
src\wgl_context.h(89): WGLGETPROCADDRESS_T GetProcAddress;
src\wgl_context.c(42): _glfw_wglGetProcAddress("wglGetExtensionsStringEXT");
src\wgl_context.c(44): _glfw_wglGetProcAddress("wglGetExtensionsStringARB");
src\wgl_context.c(48): _glfw_wglGetProcAddress("wglCreateContextAttribsARB");
src\wgl_context.c(52): _glfw_wglGetProcAddress("wglSwapIntervalEXT");
src\wgl_context.c(56): _glfw_wglGetProcAddress("wglGetPixelFormatAttribivARB");
src\wgl_context.c(289): _glfw.wgl.opengl32.GetProcAddress = (WGLGETPROCADDRESS_T)
src\wgl_context.c(290): GetProcAddress(_glfw.wgl.opengl32.instance, "wglGetProcAddress");
src\wgl_context.c(659): const GLFWglproc proc = (GLFWglproc) _glfw_wglGetProcAddress(procname);
Does this mean that I don't need to load any other GL functions? [Question]
I'm just sort of confused about the GL workflow.
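For anything beyond what GL.h declares, GLFW exposes this loading mechanism to applications as glfwGetProcAddress. A minimal sketch, assuming a context has already been made current and the PFNGLCREATESHADERPROC typedef from the Khronos glext.h/glcorearb.h headers:
PFNGLCREATESHADERPROC pglCreateShader =
    (PFNGLCREATESHADERPROC) glfwGetProcAddress("glCreateShader");
if (pglCreateShader) {
    GLuint shader = pglCreateShader(GL_VERTEX_SHADER);
    /* ... use the OpenGL 2.0+ function through the loaded pointer ... */
}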
UPDATE
I found out that those GL function calls are linked against opengl32.lib. What does this mean? That I'm using the default 1.1 GL implementation shipped with Windows 10? So do I really not need to import these functions from an actual driver, say nvoglv32.dll, but instead use the statically linked ones in opengl32.lib?

That wording in the OpenGL wiki is a little bit unfortunate. The details are a bit more complicated. There are three parts to an OpenGL environment:
the operating system ABI (application binary interface) contract
the OpenGL context
the window system integration
Historically, the way OpenGL integrates with the OS is a crude hack with a rather poorly designed interface: since OpenGL is an API designed to talk to the graphics driver, it's not some kind of 3rd party library you could simply install. A certain set of its interfaces must be provided by the operating system. Exactly which interfaces these are is written down in the ABI contract. Of course each OS has its own contract, and it may even change between versions.
To support newer versions of OpenGL the so-called "extension mechanism" is defined, through which functions outside of the ABI contract can be loaded. Functions that are part of the ABI contract may or may not be available through this mechanism as well, so don't rely on that assumption.
In Windows (from Win-NT-4 and Win-95B onwards) the ABI contract assures that a program will always find a conforming OpenGL-1.1 implementation. For the sake of simplicity, exactly the OpenGL-1.1 entry points are directly exposed by the interface stub library (opengl32.dll, with the symbol table being available through opengl32.lib), nothing less, nothing more. Device drivers then attach to that stub library with their end of the OpenGL implementation, which talks to the hardware. For all OpenGL contexts the OpenGL-1.1 stubs are invariant, i.e. they're the same for all contexts. Extended functionality, OTOH, is specific to each context. So for each OpenGL context that is created, the extension function pointers have to be loaded individually and properly matched to the active context when called. It also means that you first have to create a context (and make it current) before you can even attempt to load its extension functions.
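A minimal sketch of that per-context loading on Windows (the PFNGLGENBUFFERSPROC typedef comes from the Khronos glext.h/glcorearb.h headers; error handling is omitted):
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>

PFNGLGENBUFFERSPROC pglGenBuffers = NULL;

void load_gl_functions(void)
{
    /* Only meaningful while a context is current; strictly speaking the
       resulting pointer belongs to that context. */
    pglGenBuffers = (PFNGLGENBUFFERSPROC) wglGetProcAddress("glGenBuffers");
}
GLFW wraps this mechanism behind glfwGetProcAddress (falling back to GetProcAddress on opengl32.dll for the OpenGL-1.1 entry points that wglGetProcAddress does not return), which is why its own sources only need the handful of WGL entry points listed in the question.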
In X11/GLX environments (e.g. Linux, the *BSDs, Solaris) the situation is as follows: the ABI contract specifies that if OpenGL is available, then at least the functions of OpenGL-1.2 must be exported by the OpenGL implementation's shared object. This is of particular note! While on Windows there is a vendor-neutral stub, on X11/GLX the libGL.so your program dynamically loads is the implementation. The libGL.so may also export far more symbols, potentially covering all the supported OpenGL features; as a programmer you should not rely on this, though. Also, in GLX it's asserted that all entry points are context invariant, i.e. you can load them once and reuse them for all contexts.
On MacOS-X you get OpenGL through a framework. The ABI contract is per OS version, so the OpenGL functions available to a program are determined entirely by the specific OS version. There is an extension mechanism, but it hardly serves any purpose; only a few Apple-specific extensions are exported through it, so don't even bother with it on Apple's OS.

Related

GL_DEPTH_TEST fail in middle of program

I was following some tutorials on learnopengl.com, moving the camera around, and suddenly GL_DEPTH_TEST fails.
GL_DEPTH_TEST WORKS AT FIRST, THEN FAILS
The program looks like this:
int main(){
    glEnable(GL_DEPTH_TEST);
    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 36); // some draw function
    }
}

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    handler();
}
It actually fails in some other programs as well (meaning other tutorials I am building). If I place glEnable(GL_DEPTH_TEST) in the loop, then it does not fail, so I suspect that GL_DEPTH_TEST has somehow been disabled or has failed at runtime.
Is there a reason for this to happen?
How do I prevent it?
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Is it hardware related? I am using a Phenom X6 AMD CPU with a Radeon 6850 card in my Windows PC.
EDIT:
I think my window setup is actually quite standard stuff:
#include <GL/glew.h>    // needed for glewInit(); include before GLFW
#include <GLFW/glfw3.h>

int main(){
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();
    while(!glfwWindowShouldClose(window)){}
}
EDIT:
I used glIsEnabled() to check, and indeed GL_DEPTH_TEST was disabled after some time. This happens in two of the programs I built: one just pans around by key press (changing the camera position), the other rotates by glfwGetTime(). The line if(!glIsEnabled(GL_DEPTH_TEST)) std::cout << "time: " << glfwGetTime() << " no depth!!" << std::endl; produced output.
Does Google Maps running WebGL in the background have anything to do with that?
I guess I shall have to resort to putting glEnable(GL_DEPTH_TEST) in the loop.
Is there a reason for this to happen?
Normally not. OpenGL state is not supposed to change suddenly. However, you may have additional software installed that injects DLLs and does "things" to your OpenGL context: programs like FRAPS (screen capture software), stereoscopic/virtual-reality wrappers, debugging overlays, etc.
How do I prevent it?
Writing correct code ;) – and by that I mean the full stack: your program, the OS written by someone, the GPU drivers written by someone else. Bugs happen.
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Yes. In fact you should always set all drawing-related state anew with each drawing iteration, not only for correctness reasons, but because with more advanced rendering techniques you'll eventually have to do this anyway. For example, if you're going to render shadow maps you'll have to use FBOs, which require you to set glViewport several times while rendering a frame. Or say you want to draw a minimap and/or a HUD: then you'll have to disable depth testing in between.
If your program is structured like this from the very beginning, things get much easier.
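A minimal sketch of re-establishing the needed state every frame instead of relying on it surviving a one-time setup (the draw call and the window variable stand in for your actual scene code):
while (!glfwWindowShouldClose(window))
{
    int width, height;
    glfwGetFramebufferSize(window, &width, &height);

    glViewport(0, 0, width, height);
    glEnable(GL_DEPTH_TEST);              // set it anew, don't assume it is still enabled
    glDepthFunc(GL_LESS);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDrawArrays(GL_TRIANGLES, 0, 36);    // your actual drawing

    glfwSwapBuffers(window);
    glfwPollEvents();
}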
Is it hardware related?
No. OpenGL is a software level specification and a conforming implementation must do whatever the specification says, regardless of the underlying hardware.
It may be your window setup. Can you post your window and OpenGL initialization?
EDIT
I can see you are requesting an OpenGL 3.3 core context; you have to set
glewExperimental = GL_TRUE;
before glewInit to make it work correctly.
Try adding it and check for any errors returned by glewInit:
GLenum err = glewInit();
if (err != GLEW_OK)
    std::cerr << glewGetErrorString(err) << std::endl;
Does Google Maps running WebGL in the background have anything to do with that?
No, it shouldn't, because OpenGL doesn't share data between processes.

dll injection: drawing simple game overlay with opengl

I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game.
This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think of it like a game trainer.
The goal, in the first place, is to draw a few primitives at a specific point on the screen. Later I want a nice-looking little "GUI" component in the game window.
The game uses the SwapBuffers function from gdi32.dll.
Currently I'm able to inject a custom DLL into the game and hook the SwapBuffers function.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's 3D drawing mode to 2D, drawing the 2D overlay on the screen, and switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3d to 2d switching code)?
I would like to know the best way to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component from OpenGL primitives such as lines and quads?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code or tutorial for something similar is appreciated too.
The game, by the way, is Counter-Strike 1.6, and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
//1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method, save the old context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();

    //2. If the new context has not been created yet, create it
    //(we need the "hdc" parameter for the current window, so the initialization
    //process happens in this method - does anyone have a better solution?)
    //Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }

    //Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);

    return gdiSwapBuffersOLD(hdc);
}

GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}

GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
(screenshot: cs overlay example)
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap buffer function which one(s) are the right ones to modify.
There are a couple of issues with such kind of OpenGL interceptions, though:
OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from sufficient to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might have no effect, and what really appears on the screen depends on the shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might have the wrong winding for the current culling mode. If the color masks are set to GL_FALSE or the draw buffer is not set to where you expect it, nothing will appear.
Also note that your attempt to "reset" the matrices is also wrong. You seem to assume that the current matrix mode is GL_MODELVIEW. But this doesn't have to be the case. It could as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen.
As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you for example failed to restore the exact matrix mode. As you have seen in point 1, a lot more state changes will be required, so restoring it all will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here.
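A minimal sketch of bracketing the overlay with the legacy attribute and matrix stacks (drawOverlay is a hypothetical stand-in for your own drawing code):
glPushAttrib(GL_ALL_ATTRIB_BITS);   /* coarse but simple; saves enables, masks, matrix mode, ... */
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

drawOverlay();                      /* hypothetical overlay drawing */

glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();                      /* restores the saved state, including the matrix mode */
Note that the attribute stack does not cover things like the currently bound shader program or buffer objects, so for a game using those you would still have to save and restore that state by hand.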
SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as a parameter and only indirectly refers to any GL context. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation functions as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get at the context.
Putting this all together opens up another alternative: You can create your own GL context, bind it temporarily during the hooked SwapBuffers call and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You still can augment the image content the application has rendered, since the framebuffer is part of the drawable, not the GL context. Doing so might have a negative impact on performance, but it might be so small that you never would even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.

Porting C++ OpenGL from windows to mac. What to do with GLKMatrix4MakePerspective?

I am trying to port a C++ program I wrote, which uses SFML and OpenGL, from Windows to Mac OS X. I am using g++ to compile on both platforms.
Here is my current code for calculating the projection matrix:
void perspectiveCalculate (int width, int height) {
    // Prevent a divide by zero
    if (height < 1) height = 1;
    if (width < 1) width = 1;

    glViewport(0, 0, width, height);   // Reset the current viewport
    glMatrixMode(GL_PROJECTION);       // Select the projection matrix
    glLoadIdentity();                  // Reset the projection matrix

    // Field of view, aspect ratio of the window, near clipping, far clipping
    gluPerspective(45.0f, (GLfloat)width/(GLfloat)height, 0.1f, 1000.0f);

    glMatrixMode(GL_MODELVIEW);        // Select the modelview matrix
    glLoadIdentity();                  // Reset the modelview matrix
}
However, g++ is saying that gluPerspective is deprecated, and I should be using GLKMatrix4MakePerspective instead. I found the manual page on the function but I'm not sure how to integrate it with the rest of my code. What should I do?
All your matrix code there is deprecated in modern OpenGL. You can keep using it on the Mac (warnings and all) if you have a GL 2.x context, but if you want to use GL 3.2 or newer you only get a Core Profile context, and for that you need to move to modern OpenGL code. (Ditto if you want to use OpenGL ES 2.0 or newer on mobile platforms.)
In modern OpenGL, the matrix setup functions you're using are all gone. The fixed-function pipeline they go with is replaced by a programmable pipeline... instead of telling OpenGL what model, view, and projection matrices you want and having it do the transformations for you, you do the transformations yourself in a GLSL vertex shader. The good news is that you can now leverage programmable shaders to do all kinds of effects not possible with the fixed-function pipeline. The bad news is you have to do the basic stuff yourself.
Part of doing that basic stuff is finding (or writing) a math library to generate all the matrices you'll be handing off to the vertex shader for doing your transformations. GLKMatrix4 is part of Apple's library for such things. GLKMatrix4MakePerspective takes almost the same set of arguments as gluPerspective (vertical FOV, though in radians rather than degrees; aspect ratio; near and far clipping distances), but instead of setting fixed-function matrix state it returns a GLKMatrix4 data structure.
You then pass this data structure to a vertex shader via a uniform variable. Or, if you're working with GLKit anyway, you can look into GLKBaseEffect, which provides an Objective-C interface roughly analogous to the old fixed-function pipeline.
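A minimal sketch of that hand-off, assuming width, height, and a uniform location projectionLoc obtained from your own linked shader program:
#include <GLKit/GLKMath.h>

GLKMatrix4 projection = GLKMatrix4MakePerspective(
    GLKMathDegreesToRadians(45.0f),   // vertical FOV in radians (gluPerspective took degrees)
    (float)width / (float)height,     // aspect ratio
    0.1f, 1000.0f);                   // near and far clipping distances

// With the shader program bound (glUseProgram), hand the matrix to the vertex shader:
glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, projection.m);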
Dealing with the entire process of moving to modern OpenGL is beyond the scope of one SO answer. Here's a few resources:
http://www.opengl-tutorial.org — Good overall tutorial. They use GLM for matrix math, but you can use GLKit math just as well
Migrating to OpenGL Core Profile — Apple Developer video
"OpenGL Game" Xcode template — this is for OpenGL ES on iOS, but you can run it in the simulator, and it provides a nice simple example of how to use GLKit to generate matrices and either hand them off to shaders or pass them to GLKBaseEffect. The GL part of this is pretty much the same on both iOS and OS X.

SDL with OpenGL (freeglut) crashes on call to glutBitmapCharacter

I have a program using OpenGL through freeglut under SDL. The SDL/OpenGL initialization is as follows:
// Initialize SDL
SDL_Init(SDL_INIT_VIDEO);
// Create the SDL window
SDL_SetVideoMode(SCREEN_W, SCREEN_H, SCREEN_DEPTH, SDL_OPENGL);
// Initialize OpenGL
glClearColor(BG_COLOR_R, BG_COLOR_G, BG_COLOR_B, 1.f);
glViewport(0, 0, SCREEN_W, SCREEN_H);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, SCREEN_W, SCREEN_H, 0.0f, -1.0f, 1.0f);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I've been using glBegin() ... glEnd() blocks without any trouble to draw primitives. However, in this program when I call any glutBitmapX function, the program simply exits without an error status. The code I'm using to draw text is:
glColor3f(1.f, 1.f, 1.f);
glRasterPos2f(x, y);
glutStrokeString(GLUT_BITMAP_8_BY_13, (const unsigned char*)"test string");
In previous similar programs I've used glutBitmapCharacter and glutStrokeString to draw text and it seemed to work. The only difference is that I'm using freeglut with SDL now, instead of just GLUT as I did in previous programs. Is there some fundamental problem with my setup that I'm not seeing, or is there a better way of drawing text?
From the GLUT documentation, Section 2, Initialization:
Routines beginning with the glutInit- prefix are used to initialize
GLUT state. The primary initialization routine is glutInit that should
only be called exactly once in a GLUT program. No non- glutInit-
prefixed GLUT or OpenGL routines should be called before glutInit.
The other glutInit- routines may be called before glutInit. The reason
is these routines can be used to set default window initialization
state that might be modified by the command processing done in
glutInit. For example, glutInitWindowSize(400, 400) can be called
before glutInit to indicate 400 by 400 is the program's default window
size. Setting the initial window size or position before glutInit
allows the GLUT program user to specify the initial size or position
using command line arguments.
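So if you really want to keep freeglut's font routines alongside SDL, a minimal sketch would be to call glutInit before any other GLUT or GL routine, keeping the rest of the SDL setup from the question unchanged:
int main(int argc, char** argv)
{
    glutInit(&argc, argv);   /* initialize GLUT state before any other GLUT/GL call */

    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(SCREEN_W, SCREEN_H, SCREEN_DEPTH, SDL_OPENGL);
    /* ... remaining GL setup and main loop ... */
    return 0;
}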
Don't try to mix-n-match GLUT and SDL. It will end in tears and/or non-functioning event loops. Pick one framework and stick with it.
You have likely corrupted the heap.

OpenGL deprecated functions and gluPerspective and Transform

I am new to OpenGL and I am still experimenting with basic shapes. I sometimes come across functions like glEnd, and many more, that are not mentioned in the OpenGL 3+ documentation. Were they replaced by other functions? Or do I have to write them manually?
Is there a tutorial online that uses OpenGL 3+?
As for gluPerspective: I have read that it isn't used in OpenGL 3+. Isn't it supposed to be a separate function in GLUT? What does it have to do with OpenGL 3+? Lastly, what does Transform(Width, Height); do? (I found it in some sample code I downloaded, and I can't find it in GLUT or OpenGL.)
Here is the code:
GLvoid Transform(GLfloat Width, GLfloat Height)
{
    glViewport(0, 0, Width, Height);                /* Set the viewport */
    glMatrixMode(GL_PROJECTION);                    /* Select the projection matrix */
    glLoadIdentity();                               /* Reset the projection matrix */
    gluPerspective(20.0, Width/Height, 0.1, 100.0); /* Calculate the aspect ratio of the window */
    glMatrixMode(GL_MODELVIEW);                     /* Switch back to the modelview matrix */
}

/* A general OpenGL initialization function. Sets all of the initial parameters. */
GLvoid InitGL(GLfloat Width, GLfloat Height)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);  /* This will clear the background color to black */
    glLineWidth(2.0);                  /* Set the line width */
    Transform(Width, Height);          /* Perform the transformation */
}

/* The function called when our window is resized */
GLvoid ReSizeGLScene(GLint Width, GLint Height)
{
    if (Height == 0) Height = 1;       /* Sanity checks */
    if (Width == 0) Width = 1;
    Transform(Width, Height);          /* Perform the transformation */
}
I sometimes find many functions like glEnd and many more, that are not mentioned in the OpenGL 3+ documentation. Were they replaced by other functions?
They have been completely removed, since the way they work doesn't reflect how modern graphics systems operate, on either the hardware or the software side. glBegin(…) and glEnd() bracket the so-called immediate mode: every call causes an operation. This reflects how early graphics systems were built, some 20 years ago.
Today one prepares batches of data, transfers them to GPU memory and triggers batch drawings with a single drawing call. OpenGL does this through vertex arrays and vertex buffer objects (VBOs). Vertex arrays have been around since OpenGL-1.1 (1996), and the VBO API is founded on vertex arrays, so for any reasonable program VBO support was added easily.
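A minimal sketch of that batched style, drawing a colored triangle from an interleaved VBO (with a core profile context you would additionally need a vertex array object bound, and a shader program that reads attributes 0 and 1):
static const GLfloat vertices[] = {
    /*   x      y      r     g     b  */
    -0.6f, -0.4f,   1.f,  0.f,  0.f,
     0.6f, -0.4f,   0.f,  1.f,  0.f,
     0.0f,  0.6f,   0.f,  0.f,  1.f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);   /* position: attribute location 0 */
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(1);   /* color: attribute location 1 */
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat),
                      (void*)(2 * sizeof(GLfloat)));

glDrawArrays(GL_TRIANGLES, 0, 3);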
Or do I have to write them manually? Is there a tutorial online that uses OpenGL 3+?
It depends on the function in question. For example, the whole texture environment and combiner setup has been removed, just like the matrix manipulation functions and the whole lighting interface.
What they did and configured is now done through shaders and uniforms. Since you're expected to supply shaders, one might say you're expected to implement this yourself. OTOH you'll quickly find that writing a shader is often easier and more concise than fiddling with large numbers of OpenGL parameter-setting calls. Also, once you've progressed far enough you'll hardly miss the matrix manipulation functions. Every serious application dealing with 3D graphics maintains the transformation matrices itself, be it for enhanced flexibility or simply because those matrices are required in other places, too, e.g. some physics simulation.
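A minimal sketch of what replaces the fixed-function matrix calls: the application owns the matrices and a GLSL vertex shader applies them (program and projectionMatrix are assumed to come from your own setup code):
const char* vertex_shader_src =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "uniform mat4 projection;\n"
    "uniform mat4 modelview;\n"
    "void main() {\n"
    "    gl_Position = projection * modelview * vec4(position, 1.0);\n"
    "}\n";

/* On the application side, with the linked program bound via glUseProgram: */
GLint loc = glGetUniformLocation(program, "projection");
glUniformMatrix4fv(loc, 1, GL_FALSE, projectionMatrix);   /* projectionMatrix: your own float[16] */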
As for " gluPerspective" I have read that it isn't used in Opengl 3+. Isn't it supposed to be a separate function in GLUT? what does it has to do with OpenGL 3+? Last, what does Transform( Width, Height ); do? (I found it in some sample code I downloaded, and I can't find it in GLUT or OpenGL).
gluPerspective is part of GLU. GLU is a companion library of OpenGL utility functions that used to ship with OpenGL-1.1. However, it is not part of the OpenGL specification and is completely optional.
GLUT is something else again. It's a simplistic framework for quick and dirty setup of an OpenGL window and context, offering a minimalistic input API. It's also no longer actively maintained. Personally, I recommend not using it. If you must use a GLUT API, use FreeGLUT. Or better yet, don't use GLUT at all; use a toolkit like Qt or GTK, or a framework like GLFW or SDL.
Were they replaced by other functions?
No.
Or do I have to write them manually?
For old-style immediate-mode geometry submission you'll have to make your own work-alike. The matrix stack has a replacement.
Is there a tutorial online that uses OpenGL 3+?
At least one.