What defines those OpenGL render dimension limits?

On my system, anything I draw with OpenGL outside a range of roughly (-32700, 32700) is not rendered (or is folded back into that range; I can't tell which).
What defines those limits? Can they be modified?
Edit: Thanks all for pointing me in the right direction. It turned out my drawing code was using GLshort values. I replaced them with GLint values and no longer see those limits.

I don't know what exactly you are doing, but this looks like a numeric overflow of a signed 16-bit integer (-32768..32767).

Are you calling glVertex3s to draw your vertices? As Malte Clasen pointed out, your vertices would overflow at 2^15-1.
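For illustration, here is a minimal sketch of that overflow, assuming legacy immediate-mode drawing as in the original code; the calls are standard GL 1.x, and the coordinate value is just an example:

#include <GL/gl.h>

/* A coordinate of 40000 does not fit in a GLshort (max 32767), so the
   short variant receives a wrapped value, while the int variant is fine. */
void draw_sketch(void)
{
    glBegin(GL_POINTS);
    glVertex3s((GLshort)40000, 0, 0); /* typically wraps to -25536 */
    glVertex3i(40000, 0, 0);          /* GLint holds the value correctly */
    glEnd();
}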


Why do we need primitive assembly after vertex post processing?

The output of vertex post-processing is in window space, and after that (in the standard pipeline) primitive assembly happens. Why?
I know primitive assembly happens at several stages, such as before clipping, but why do we need PA at this stage?
Primitive assembly happens before clipping, but the specification isn't clear on exactly where before it. In fact:
After a primitive is formed, it is clipped to a clip volume.
That's all it says about its location (aside from primitive assembly also being needed for tessellation and geometry shaders).

Does OpenGL have a default value for glStencilMask?

For interest's sake, I'm curious whether glStencilMask and glStencilMaskSeparate (and similar functions) have a default value, or whether they're implementation-defined, or undefined.
I assume the wise thing to do is to always set them from the get-go, but I'm curious whether they just "work" by coincidence or whether there is in fact a default value set.
Slightly related: I recall reading somewhere that on NVIDIA cards you don't have to set the active texture because it is zero by default, but AMD cards require you to set it or you can get junk results. This makes me wonder whether the same applies here, i.e. whether the stencil state just happens to work for me by chance and by not setting it I've been playing a dangerous game.
I looked through the OpenGL spec [section 17.4.2] for the definitions of these functions, but couldn't find an answer to my question.
The initial state of glStencilMask is clearly specified. Initially, the mask is all 1's.
OpenGL 4.6 API Core Profile Specification - 17.4.2 Fine Control of Buffer Updates; page 522:
void StencilMask( uint mask );
void StencilMaskSeparate( enum face, uint mask );
control the writing of particular bits into the stencil planes.
The least significant s bits of mask, where s is the number of bits in the stencil buffer, specify an integer mask. Where a 1 appears in this mask, the corresponding bit in the stencil buffer is written; where a 0 appears, the bit is not written.
[...]
In the initial state, the integer masks are all ones, as are the bits controlling depth value and RGBA component writing.
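As a sanity check, you could query the write mask before ever touching it. A small sketch (the GL_STENCIL_WRITEMASK query is standard; the surrounding function is just for illustration):

#include <GL/gl.h>
#include <stdio.h>

void check_initial_stencil_mask(void)
{
    GLint mask = 0;
    glGetIntegerv(GL_STENCIL_WRITEMASK, &mask);         /* front-face write mask */
    printf("initial stencil write mask: 0x%x\n", mask); /* expect all 1's        */

    glStencilMask(0xFFFFFFFF); /* setting it explicitly anyway costs nothing */
}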

What exactly does glStencilMask() do? [duplicate]

This question already has an answer here:
How does mask affect stencil value according to stencil op?
I'm a beginner at OpenGL, and while learning about stenciling, one function has been troubling me: glStencilMask.
I've been told that it can be used to enable or disable stenciling; how does that work?
Why are hexadecimal values passed into this function?
Why are the hex values 0xff and 0x00 so often passed specifically?
Does this function prevent drawing to the color buffer and/or the stencil buffer?
Would you kindly explain what it's doing in simple terms?
Do you know how bitmasks work? That is what this is.
0xff is 11111111 in binary. That means GL can write to all 8 of the stencil bits.
0x00 is 00000000 in binary, and GL is not allowed to write to any bits when this mask is used.
Since the stencil buffer is effectively one large bitwise machine, it would serve you well to brush up on or learn these concepts in detail. If you are having trouble understanding why you would want to mask off certain bits, you may not be able to make effective use of the stencil buffer.
Masking off certain bits between passes will let you preserve the results stored in parts of the stencil buffer. Why you would want this is entirely application-specific, but this is how the stencil buffer works.
The stencil mask never disables the stencil buffer completely; you'd have to call glDisable(GL_STENCIL_TEST) for that. It simply enables or disables writes to portions of it.
On a final note, if you disable GL_STENCIL_TEST or GL_DEPTH_TEST that actually does two things:
Disables the test
Disables writing stencil / depth values
So if, for some reason, you ever wanted to write a constant depth or stencil value and assumed that disabling the test would accomplish that -- it won't. Use GL_ALWAYS for the test function instead of disabling the test if that is your intention.
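To tie this together, here is a sketch of a common two-pass pattern; draw_mask_shape and draw_scene are hypothetical application functions, everything else is standard stencil state:

#include <GL/gl.h>

void draw_mask_shape(void); /* hypothetical: renders the masking geometry */
void draw_scene(void);      /* hypothetical: renders the actual content   */

void stencil_mask_sketch(void)
{
    glEnable(GL_STENCIL_TEST);

    /* Pass 1: always pass the test and write 1 wherever the shape lands. */
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glStencilMask(0xFF);              /* allow writes to all stencil bits */
    draw_mask_shape();

    /* Pass 2: draw only where stencil == 1, with stencil writes disabled. */
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilMask(0x00);              /* stencil buffer is now read-only  */
    draw_scene();
}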

OpenGL ES 2 glGetActiveAttrib and non-floats

I'm porting an engine from DX9/10/11 over to OpenGL ES 2. I'm having a bit of a problem with glGetActiveAttrib though.
According to the docs the type returned can only be one of the following:
The symbolic constants GL_FLOAT, GL_FLOAT_VEC2, GL_FLOAT_VEC3, GL_FLOAT_VEC4, GL_FLOAT_MAT2, GL_FLOAT_MAT3, or GL_FLOAT_MAT4 may be returned.
This seems to imply that you cannot have an integer vertex attribute? Am I missing something? Does this really mean you HAVE to implement everything as floats? Does this mean I can't implement a colour as 4 byte values?
If so, this seems very strange as this would be a horrific waste of memory ... if not, can someone explain where I'm going wrong?
Cheers!
Attributes must be declared as floats in a GLSL ES shader, but you can pass them GL_SHORT data or other supported types (listed here); the conversion happens automatically.
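For example (a sketch; colorLoc, stride and offset are assumptions about your particular vertex layout), a colour stored as 4 unsigned bytes per vertex can be fed to an attribute the shader declares as vec4:

#include <GLES2/gl2.h>

void set_color_attrib(GLuint colorLoc, GLsizei stride, const void *offset)
{
    /* GL_TRUE normalizes the 0..255 bytes to 0.0..1.0 floats for the shader */
    glVertexAttribPointer(colorLoc, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride, offset);
    glEnableVertexAttribArray(colorLoc);
}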

How many mipmaps does a texture have in OpenGL

Never mind that I'm the one who created the texture in the first place and should know perfectly well how many mipmaps I loaded/generated for it. I'm doing this for a unit test. There doesn't seem to be a glGetTexParameter parameter to find this out. The closest I've come is something like this:
int max_level;
glGetTexParameteriv( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, &max_level );
int max_mipmap = -1;
for ( int i = 0; i < max_level; ++i )
{
    int width;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, i, GL_TEXTURE_WIDTH, &width );
    if ( 0 == width )
    {
        max_mipmap = i - 1;
        break;
    }
}
Anyhow, glGetTexLevelParameteriv() returns a width of 0 for a nonexistent mipmap if I'm using an NVIDIA GPU, but with Mesa it returns GL_INVALID_VALUE, which leads me to believe that this is very much the Wrong Thing To Do.
How do I find out which mipmap levels I've populated a texture with?
The spec is kinda fuzzy on this. It says that you will get GL_INVALID_VALUE if the level parameter is "larger than the maximum allowable level-of-detail". Exactly how this is defined is not stated.
The documentation for the function clears it up a bit, saying that it is the maximum possible number of LODs for the largest possible texture (GL_MAX_TEXTURE_SIZE). Other similar functions like the glFramebufferTexture family explicitly state this as the limit for GL_INVALID_VALUE, so I would expect the same limit here.
Therefore, Mesa has a bug. However, you could work around this by assuming that either 0 or a GL_INVALID_VALUE error means you've walked off the end of the mipmap array.
That being said, I would suggest employing glTexStorage and never having to even ask the question again. This will forcibly prevent someone from setting MAX_LEVEL to a value that's too large. It's pretty new, from GL 4.2, but it's implemented (or will be very soon) across all non-Intel hardware that's still being supported.
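A rough sketch of that route (assumes a GL 4.2+ context, that your loader header is already included, and GL_RGBA8 is just a placeholder format):

#include <math.h>
/* assumes a GL header / loader (e.g. glad, GLEW) is already included */

void make_immutable_texture(GLuint tex, int w, int h)
{
    /* The level count is fixed up front, so there is nothing to guess later. */
    int levels = 1 + (int)floor(log2((double)(w > h ? w : h)));
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, w, h);
    /* Fill each level with glTexSubImage2D, or call glGenerateMipmap. */
}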
It looks like there is currently no way to query how many mipmap levels a texture has, short of the OP's trial-and-error approach with @NicolBolas' invalid-value check. For most cases I guess the performance wouldn't matter if the level-0 size doesn't change often.
However, assuming the texture does not have a limited number of levels, the specs give the preferred calculation (note the use of floor, and not ceiling as some examples give):
numLevels = 1 + floor(log2(max(w, h, d)))
What is the dimension reduction rule for each successively smaller mipmap level?
Each successively smaller mipmap level is half the size of the previous level, but if this half value is a fractional value, you should round down to the next largest integer.
...
Note that this extension is compatible with supporting other rules because it merely relaxes the error and completeness conditions for mipmaps. At the same time, it makes sense to provide developers a single consistent rule since developers are unlikely to want to generate mipmaps for different rules unnecessarily. One reasonable rule is sufficient and preferable, and the "floor" convention is the best choice.
[ARB_texture_non_power_of_two]
This can of course be verified with the OP's method, or, in my case, by the GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT I received from glFramebufferTexture2D(..., numLevels).
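The formula quoted above translates directly into a small helper (a sketch; the function name is mine):

#include <math.h>

/* w, h, d are the level-0 dimensions; pass 1 for unused dimensions. */
static int num_mip_levels(int w, int h, int d)
{
    int m = w > h ? w : h;
    if (d > m) m = d;
    return 1 + (int)floor(log2((double)m));
}
/* e.g. num_mip_levels(57, 43, 1) == 6  (57 -> 28 -> 14 -> 7 -> 3 -> 1) */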
Assuming you're building mipmaps in a standard way, the number of unique images will be floor(log2(max(width, height))) + 1, matching the formula above. This can be derived by noticing that mipmaps reduce the image size by a factor of two (rounding down) each time until there is a single pixel.