I recently tried to write an .obj mesh loader in C++ with OpenGL, and I am confronted with a strange problem.
I have a std::vector<Vector3f> that represents the coordinates of the vertices of the faces, and another one that represents their normals. My Vector3f contains a std::array<float,3>, so contiguity between elements is preserved.
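For context, a minimal reconstruction of that Vector3f (hypothetical, but consistent with the .data[0] access in the calls below):

#include <array>

// One std::array keeps the three floats contiguous, so &v.data[0] can be
// handed directly to the gl*Pointer calls with sizeof(Vector3f) as stride.
struct Vector3f {
    std::array<float, 3> data;
};

static_assert(sizeof(Vector3f) == 3 * sizeof(float),
              "no padding, so sizeof(Vector3f) is a valid stride");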
// Vertex pointer to triangle array
glVertexPointer(3, GL_FLOAT, sizeof(Vector3f), &(_triangles[0].data[0]));
// Normal pointer to normal array
glNormalPointer(GL_FLOAT, sizeof(Vector3f), &(_normals[0].data[0]));
When I compile the program on my school's computers it gives the correct results, but when I compile it on my desktop computer the lighting is strange: it's as if all the faces reflect light straight into the camera, so they all appear white.
Do you have any idea what my problem could be?
EDIT :
My computer runs Arch Linux, my window manager is Awesome, and this is written on a sticker on my PC:
Intel Core i7-3632QM 2.2GHz with Turbo Boost up to 3.2GHz.
NVIDIA GeForce GT 740M
I don't know much about my school's computers, but I think they run Ubuntu.
I figured it out.
Of course, with so little information, it would have been difficult for anyone else to find the answer.
The code was based on sources provided by my school, and at one point the shininess of the mesh was defined this way:
glMaterialf (GL_FRONT, GL_SHININESS, 250);
However, the OpenGL documentation specifies that:
Only values in the range [0, 128] are accepted.
So I guess the two OpenGL implementations reacted differently to this mistake:
my school's implementation probably clamped the shininess value to [0, 128];
my computer's implementation probably saturated the shininess, which is why my results were so bright.
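For reference, a defensive fix is simply to keep the value in the legal range before handing it to OpenGL; a minimal sketch, assuming the shininess arrives as a plain float read from the material:

// GL_SHININESS only accepts values in [0, 128]; anything outside generates
// GL_INVALID_VALUE and the call is ignored, leaving the previous state in place.
float shininess = 250.0f;                 // e.g. value read from the material file
if (shininess < 0.0f)   shininess = 0.0f;
if (shininess > 128.0f) shininess = 128.0f;
glMaterialf(GL_FRONT, GL_SHININESS, shininess);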
Anyway, thank you very much for your help, and for taking the time to read this post.
Related
I have an OpenGL test application that is producing incredibly unusual results. When I start up the application it may or may not feature a severe graphical bug.
It might produce an image like this:
http://i.imgur.com/JwPoDrh.jpg
Or like this:
http://i.imgur.com/QEYwhBY.jpg
Or just the correct image, like this:
http://i.imgur.com/zUJbwCM.jpg
The scene consists of one spinning colored cube (made of 12 triangles) with a simple shader on it that colors the pixels based on the absolute value of their model space coordinates. The junk faces appear to spin with the cube as though they were attached to it and often junk triangles or quads flash on the screen briefly as though they were rendered in 2D.
The thing I find most unusual is that the behavior is highly inconsistent: starting the exact same application repeatedly, without changing anything else on the system, produces different results, sometimes bugged and sometimes not, and the arrangement of the junk faces isn't consistent either.
I can't really post source code for the application as it is very lengthy and the actual OpenGL calls are spread out across many wrapper classes and such.
This is occurring under the following conditions:
Windows 10 64 bit OS (although I have observed very similar behavior under Windows 8.1 64 bit).
AMD FX-9590 CPU (Clocked at 4.7GHz on an ASUS Sabertooth 990FX).
AMD Radeon HD 7970 GPU (it is a couple of years old, and occasionally areas of the screen in 3D applications become scrambled, but nothing on the scale of what I'm experiencing here).
Using SDL (https://www.libsdl.org/) for window and context creation.
Using GLEW (http://glew.sourceforge.net/) for OpenGL.
Using OpenGL versions 1.0, 3.3 and 4.3 (I'm assuming SDL is indeed using the versions I instructed it to; see the version-check sketch after this list).
AMD Catalyst driver version 15.7.1 (Driver Packaging Version listed as 15.20.1062.1004-150803a1-187674C, although again I have seen very similar behavior on much older drivers).
Catalyst Control Center lists my OpenGL version as 6.14.10.13399.
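To verify that assumption, I can query what was actually created right after context creation; a quick sketch using SDL2 and glGetString:

// Check what was actually created, rather than what was requested.
int major = 0, minor = 0;
SDL_GL_GetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, &major);
SDL_GL_GetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, &minor);
printf("Context version: %d.%d\n", major, minor);
// GL_VERSION/GL_RENDERER confirm which implementation is driving the context.
printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));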
This looks like a broken graphics card to me, most likely a problem with the memory (either the memory chips themselves or the soldering). Artifacts like the ones you see can happen if, for some reason, the address for a memory operation does not fully settle, or is not set at all, before the read starts; that can happen because of a bad connection between the GPU and the memory (failed solder joints) or because the memory itself has failed.
Solution: buy a new graphics card. You may try to see what happens if you resolder it using a reflow process; there are tutorials on how to do this yourself, but a proper reflow oven gives better results.
Whenever I try to render my terrain with point lights, it only works on my Nvidia GPU and driver, and not on the Intel integrated GPU and driver. I believe the problem is in my code, hidden by the Nvidia driver, since I've heard Nvidia's OpenGL implementation is lenient and will let you get away with things you're not supposed to do. And since I get no errors, I need help debugging my shaders.
Link:
http://pastebin.com/sgwBatnw
Note:
I use OpenGL 2 and GLSL Version 120
Edit:
I was able to fix the problem on my own. For anyone with similar problems: it was not because I used the regular transformation matrix for the normals, because when I did that I set the normal's w value to 0.0. The problem was that on the Intel integrated graphics there is apparently a maximum number of arrays in a uniform (or a maximum uniform size in general), and I was going over that limit, but the driver decided not to report it. Another thing wrong with the code was that I was doing implicit type conversions (dividing vec3s by floats), so I corrected those things and it started to work. Here's my updated code.
Link: http://pastebin.com/zydK7YMh
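For anyone hitting the same silent failure, the limits the driver actually enforces can be queried up front instead of guessed at; a small sketch (GL 2.x enums; "program" here is whatever linked shader program handle you have):

// How much uniform storage does this driver actually guarantee per stage?
GLint maxVertUniformComponents = 0, maxFragUniformComponents = 0;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS,   &maxVertUniformComponents);
glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &maxFragUniformComponents);

// Even when glGetError() is clean, the link log often names the offending uniform.
GLint linked = 0;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
char log[1024] = {0};
glGetProgramInfoLog(program, sizeof(log), NULL, log);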
I made a program that changes the resolution, color depth, etc. and then renders a simple texture on screen. It all works without any problems until I switch to 8-bit color depth. Then there is a problem with calls to non-existent functions (the function pointers are 0x00), like glCreateShader. That made me wonder, and I got an idea which proved to be correct: the created context has a really low version.
After calling glGetString(GL_VERSION) I found that the context version was 1.1.0. With higher color depths it returns 4.4.
Is there any reason for the decreased version? I looked through Google and some of the opengl.org pages, but I did not find anything about 8-bit color depth being deprecated. Even Windows CAN switch to this color depth, so there is no reason why OpenGL shouldn't be able to handle it.
Sure, I can emulate it by decreasing the number of colors; memory is not what I am concerned about. I just want to know why this is happening. The program is a prototype for lab experiments, so I need to have as many options as possible, and this is cutting a third of them away.
The last thing I should add is that the program is written in C/C++ with WinAPI and some WGL functions, but I don't think that matters much.
Your graphics driver is falling back to the software implementation because no hardware accelerated pixel format matching your criteria could be found.
Most drivers will not give you hardware accelerated 8-bit per-pixel formats, especially if you request an RGB[A] (WGL_TYPE_RGBA_ARB) color mode.
Sure, I can emulate it by decreasing the number of colors; memory is not what I am concerned about. I just want to know why this is happening.
To get an 8-bit format, you must use an indexed color mode (WGL_TYPE_COLORINDEX_ARB); paletted rendering. I suspect modern drivers will not even support that sort of thing unless they offer a compatibility profile (which rules out platforms like OS X).
The smallest RGB color depth you should realistically attempt is RGB555 or RGB565. 15/16-bit color is supported on modern hardware. Indexed color modes, on the other hand, are really pushing your luck.
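If you want to confirm that this is what happened, you can inspect the pixel format you actually ended up with; a rough WGL sketch, where hdc is your window's device context:

PIXELFORMATDESCRIPTOR pfd;
int fmt = GetPixelFormat(hdc);
DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);

// PFD_GENERIC_FORMAT set with PFD_GENERIC_ACCELERATED clear means Microsoft's
// GDI software renderer was chosen -- and that one only implements OpenGL 1.1.
BOOL softwareFallback = (pfd.dwFlags & PFD_GENERIC_FORMAT) &&
                        !(pfd.dwFlags & PFD_GENERIC_ACCELERATED);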
So I'm at the point where I should begin lighting my flatly colored models. The test application is a test case for implementing only the latest methods, so I figured that ideally it should implement ray tracing (since, theoretically, it might be ideal for real-time graphics in a few years).
But where do I start?
Assume I have never done lighting in old OpenGL, so I would be going directly to non-deprecated methods.
The application currently has properly set up vertex buffer objects, with vertex, normal and color input, and it correctly draws and transforms models in space, in a flat color.
Is there a source of information that would take one from flat colored vertices to all that is needed for a proper end result via GLSL? Ideally with any other additional lighting methods that might be required to complement it.
I would not advise trying actual ray tracing in OpenGL, because you need a lot of hacks and tricks for that and, if you ask me, there is no longer much point in doing so at all.
If you want to do ray tracing on the GPU, you should go with a GPGPU language such as CUDA or OpenCL, because it makes things a lot easier (but still far from trivial).
To illustrate the problem a bit further:
For ray tracing, you need to trace secondary rays and test them for intersection with the geometry. Therefore you need access to the geometry in some clever way inside your shader; however, inside a fragment shader you cannot access the geometry unless you store it encoded in some texture. The vertex shader also does not natively provide you with this geometry information, and geometry shaders only know their neighbors, so the trouble already starts here.
Next, you need acceleration data structures to get any reasonable frame rates. However, traversing e.g. a kd-tree inside a shader is quite difficult, and if I recall correctly there are several papers devoted solely to this problem.
If you really want to go this route, though, there are a lot of papers on the topic, and it should not be too hard to find them.
A ray tracer requires extremely well designed access patterns and caching to reach good performance. However, you have only little control over these inside GLSL, and optimizing the performance can get really tough.
Another point to note is that, at least to my knowledge, real-time ray tracing on GPUs is mostly limited to static scenes, because e.g. kd-trees only work (well) for static scenes. If you want dynamic scenes, you need other data structures (e.g. BVHs, iirc?), but then you constantly need to maintain them. If I haven't missed anything, there is still a lot of research going on just on this issue.
You may be confusing some things.
OpenGL is a rasterizer. Forcing it to do ray tracing is possible, but difficult. This is why ray tracing will not be "ideal for real time graphics in a few years". In a few years, only hybrid systems will be viable.
So, you have three possibilities.
Pure ray tracing. Render only a fullscreen quad, and in your fragment shader read your scene description packed in a buffer (like a texture), traverse the hierarchy, and compute ray-triangle intersections (see the intersection sketch after this list).
Hybrid ray tracing. Rasterize your scene the normal way, and use ray tracing in your shader only on the parts of the scene that really require it (refraction, ... though much of that can be simulated in rasterization).
Pure rasterization. The fragment shader does its normal job.
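To give an idea of what option 1 involves, here is the core ray/triangle test (Möller-Trumbore) written as plain C++; inside a fragment shader you would run the equivalent GLSL against every triangle fetched from your scene buffer. This is only an illustrative sketch, not tied to any particular engine:

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the hit distance t if the ray (orig, dir) intersects triangle (v0, v1, v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -EPS && det < EPS) return false;   // ray parallel to the triangle plane
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                   // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                 // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;                        // distance along the ray
    return t > EPS;
}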
What exactly do you want to achieve? I can improve this answer depending on your needs.
Anyway, this SO question is highly related. Even if that particular implementation has a bug, it's definitely the way to go. Another possibility is OpenCL, but the concept is the same.
As of 2019, ray tracing is an option for real-time rendering, but it requires high-end GPUs most users don't have.
Some of these GPUs are designed specifically for ray tracing.
OpenGL currently does not support hardware-accelerated ray tracing.
DirectX 12 on Windows does have support for it. It is recommended to wait a few more years before creating a ray-tracing-only renderer, although it is possible using DirectX 12 with current desktop and laptop hardware. Support on mobile may take a while.
OpenGL (GLSL) can be used for ray (path) tracing. However, there are a few better options: Nvidia OptiX (CUDA toolkit, cross-platform), DirectX 12 (with the Nvidia ray tracing extension DXR, Windows only), Vulkan (Nvidia ray tracing extension VKR, cross-platform and widely used), Metal (macOS only), Falcor (a framework based on DXR, VKR and OptiX), and Intel Embree (CPU ray tracing only).
I found some of the other answers verbose. For visual proof that yes, functional ray tracers absolutely CAN be built using the OpenGL API, I highly recommend checking out some of the projects people are making on https://www.shadertoy.com/ (warning: lag).
To answer the topic: OpenGL has no RTX extension, but Vulkan does, and interop is possible. Example here: https://github.com/nvpro-samples/gl_vk_raytrace_interop
As for the actual question: to light the triangles there are tons of techniques; look up "forward", "forward+" or "deferred" renderers. The technique to use depends on your goal. The simplest and most good-looking one these days is image-based lighting (IBL) with physically based shading (PBS). Basically, you use a cubemap and blur it more or less depending on the glossiness of the object. For a simple object viewer you don't need more.
I am displaying a texture that I want to manipulate without affecting the image data. I want to be able to clamp the texel values so that anything below the lower value becomes 0, anything above the upper value becomes 0, and anything in between is linearly mapped from 0 to 1.
Originally, to display my image I was using glDrawPixels, and to solve the problem above I would create a color map using glPixelMap. This worked beautifully. However, for performance reasons I have begun using textures to display my image, and the glPixelMap approach no longer seems to work. Well, that approach may work, but I was unable to get it working.
I then tried using glPixelTransfer to set scales and biases. This seemed to have some sort of effect (not necessarily the desired one) on the first pass, but when the upper and lower constraints were changed, no effect was visible.
I was then told that fragment shaders would work. But after a call to glGetString(GL_EXTENSIONS), I found that GL_ARB_fragment_shader is not supported. Also, a call to glCreateShaderObjectARB causes a NullReferenceException.
So now I am at a loss. What should I do? Please help.
Whatever might work, I am willing to try. The vendor is Intel and the renderer is Intel 945G. I am unfortunately confined to a graphics card that is integrated on the motherboard and only has GL 1.4.
Thanks for your responses so far.
Unless you have a pretty old graphics card, it's surprising that you don't have fragment shader support. I'd suggest you try double-checking using this.
Also, are you sure you want anything above the max value to be 0? Perhaps you meant 1? If you did mean 1 and not 0, then there are some quite long-winded ways to do what you're asking.
The condensed answer is that you use multiple rendering passes. First you render the image at normal intensity. Then you use subtractive blending (look up glBlendEquation) to subtract your minimum value. Then you use blending again, with the standard additive equation and a GL_DST_COLOR factor, to multiply everything up by 1/(max - min), which may need multiple passes.
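An untested sketch of those passes, assuming glBlendEquation is available (on a driver this old it may only be exposed as glBlendEquationEXT via EXT_blend_subtract), and where drawFullscreenQuad(), cmin and cscale are your own; a scale above 1 still needs the extra multiply passes mentioned above:

// Pass 1: draw the textured image normally -> framebuffer now holds t.

// Pass 2: subtract the minimum. REVERSE_SUBTRACT gives dst - src.
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glBlendFunc(GL_ONE, GL_ONE);
glColor3f(cmin, cmin, cmin);
drawFullscreenQuad();              // untextured quad covering the image

// Pass 3: multiply by the scale. With these factors the result is src * dst.
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_DST_COLOR, GL_ZERO);
glColor3f(cscale, cscale, cscale); // only works directly while cscale <= 1
drawFullscreenQuad();

glDisable(GL_BLEND);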
If you really want to do this, please post back the GL_VENDOR and GL_RENDERER for your graphics-card.
Edit: Hmm, the Intel 945G doesn't have ARB_fragment_shader, but it does have ARB_fragment_program, which will also do the trick.
Your fragment program should look something like this (it's been a while since I wrote any, so it may still have bugs):
!!ARBfp1.0
ATTRIB tex = fragment.texcoord[0];
PARAM cbias = program.local[0];
PARAM cscale = program.local[1];
OUTPUT cout = result.color;
TEMP tmp;
TXP tmp, tex, texture[0], 2D;
SUB tmp, tmp, cbias;
MUL cout, tmp, cscale;
END
You load this into OpenGL like so:
GLuint prog;
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, strlen(src), src);
glDisable(GL_FRAGMENT_PROGRAM_ARB);
Then, before rendering your geometry, you do this:
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
colour4f cbias = cmin;
colour4f cscale = 1.0f / (cmax-cmin);
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 0, cbias.r, cbias.g, cbias.b, cbias.a);
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 1, cscale.r, cscale.g, cscale.b, cscale.a);
//Draw your textured geometry
glDisable(GL_FRAGMENT_PROGRAM_ARB);
Also check whether the GL_ARB_fragment_program extension is supported. That extension provides the ASM-style fragment programs and should be available on OpenGL 1.4-class hardware.
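A quick way to test for it on a GL 1.x context (sketch; on that version the extension string is one big space-separated list):

// Crude substring check of the classic extension string.
const char* ext = (const char*)glGetString(GL_EXTENSIONS);
int hasAsmFragmentPrograms = ext && strstr(ext, "GL_ARB_fragment_program") != NULL;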
It's really unfortunate that you're using such an ancient version of OpenGL. Can you upgrade with your card?
For a more modern OGL 2.x, this is exactly the kind of program that GLSL is for. Great documentation can be found here:
OpenGL Documentation
OpenGL Shading Language