How to render a super-size texture in OpenGL

I use glGetIntegerv to query the maximum viewport and texture sizes. Both return 16K x 16K pixels.
Here is the code:
GLint maxTexture;
GLint maxViewport[2];   // GL_MAX_VIEWPORT_DIMS returns a width/height pair
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexture);
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxViewport);
However, I need a larger texture to display a series of high-resolution images. I tried setting the viewport size to 32K x 32K, and the program ran successfully. It seems that the maximum viewport size I got from glGetIntegerv is not quite right.
But I can't set the texture size to a value larger than 16K x 16K.
Maybe I should try to create more than one texture, each of size 16K x 16K.
Someone presents an application called Manual Whole Slide Imaging.
Here is the hyperlink:
http://www.microvisioneer.com/
It seems that super-size textures have been realized in this application.
So is OpenGL the right tool to solve my problem, or is there some other solution?
UPDATE: Just now I found that when I set the viewport to 32K x 32K, no error occurred, but its true size is still 16K x 16K.

Consider an RGBA8 type. In C it looks like this:
uint8_t rgba[4] = {0x00, 0x44, 0x66, 0xff};
But you could also represent it as:
uint32_t rgba = 0x004466ff;
Similarly for type RGBA32:
uint32_t rgba[4] = {0x00000000, 0x44000000, 0x66000000, 0xff000000};
// or
uint128_t _rgba = 0x000000004400000066000000ff000000; // assuming there is a uint128_t...
HOWEVER you are allowed to treat those bits any way you want. You could:
uint32_t rgba[4] = {tex1rgba, tex2rgba, tex3rgba, tex4rgba}; // each 32-bit channel holds one packed RGBA8 texel
In the above example you have put tex1's RGBA8 data in the red channel of the
OpenGL internal RGBA32 texture, tex2's in the green channel, and so on. Assuming the textures sit edge-to-edge in a 2x2 layout, texcoord (0.1, 0.1) would map to the RGBA8 value in the red channel at texcoord 2*(0.1, 0.1). Texcoord (0.6, 0.7) maps to the RGBA8 in the alpha channel at 2*(0.6-0.5, 0.7-0.5), assuming the texture in the alpha channel sits diagonally from the texture in the red channel.
I've used this technique myself in the opposite direction, to make my application support 16-bit luminance (L16 or R16) on platforms that only support 8-bit channels. Essentially, I loaded a 16-bit single-channel texture into an 8-bit dual-channel texture and split it in the shader into a high and a low byte.
Note that you can't use any texture filtering such as GL_LINEAR while doing this; you have to filter yourself. In my experience this is not a big problem in practice.
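For illustration, here is a rough sketch of that high/low split (not the original code; the GL_RG8 format, the names, and the old-style GLSL are my own assumptions):

// Hypothetical sketch: store 16-bit luminance in a two-channel 8-bit texture
// and rebuild the value in the fragment shader. Names and formats are assumptions.
#include <cstdint>
#include <vector>

// Split each 16-bit sample into a high byte (R) and a low byte (G).
std::vector<uint8_t> packLuminance16(const std::vector<uint16_t>& src)
{
    std::vector<uint8_t> dst(src.size() * 2);
    for (size_t i = 0; i < src.size(); ++i) {
        dst[2 * i + 0] = static_cast<uint8_t>(src[i] >> 8);   // high byte -> R
        dst[2 * i + 1] = static_cast<uint8_t>(src[i] & 0xFF); // low byte  -> G
    }
    return dst;
}

// Upload e.g. with glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8, w, h, 0, GL_RG,
// GL_UNSIGNED_BYTE, packed.data()) and sample with GL_NEAREST only.

// Fragment shader (old-style GLSL) that reassembles the 16-bit value into [0,1]:
static const char* kLuminance16Frag = R"(
    uniform sampler2D lum16;
    void main()
    {
        vec2 hl = texture2D(lum16, gl_TexCoord[0].st).rg;  // high, low in [0,1]
        float value = (hl.x * 255.0 * 256.0 + hl.y * 255.0) / 65535.0;
        gl_FragColor = vec4(vec3(value), 1.0);
    }
)";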

"Interleaved rendering" in fragment shader

P.S. Yes, I posted this question on Computer Graphics Stack Exchange. But I'm posting here as well in the hope that more people will see it.
Intro
I'm trying to render multi-channel images (more than 4 channels, for the purpose of feeding them to a neural network). Since OpenGL doesn't support this natively, I have multiple 4-channel render buffers, each of which receives a corresponding portion of the channels.
For example, if I need a multi-channel image of size 512 x 512 x 16, in OpenGL I have 4 render buffers of size 512 x 512 x 4. The problem is that the neural network expects the data with strides 512 x 512 x 16, i.e. the 16 channel values of one pixel are followed by the 16 channel values of the next pixel. Currently I can efficiently read my 4 render buffers via 4 calls to glReadPixels, which gives the data strides of 4 x 512 x 512 x 4. Manually reordering the data on the client side is not an option, as it's too slow.
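For reference, the readback I have now looks roughly like this (a sketch; the FBO variable and the RGBA8 format are assumptions, and a GL function loader is assumed to be set up):

// Read the 4 color attachments one by one; the result is laid out as
// 4 x 512 x 512 x 4 rather than the 512 x 512 x 16 the network expects.
#include <cstdint>
#include <vector>

std::vector<uint8_t> readAllAttachments(GLuint fbo)
{
    const int w = 512, h = 512, buffers = 4, channels = 4;
    std::vector<uint8_t> data(static_cast<size_t>(buffers) * w * h * channels);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    for (int i = 0; i < buffers; ++i) {
        glReadBuffer(GL_COLOR_ATTACHMENT0 + i);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE,
                     data.data() + static_cast<size_t>(i) * w * h * channels);
    }
    return data;
}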
Main question
I've got an idea: render to a single 4-channel render buffer of size 512*4 x 512 x 4, because stride-wise it's equivalent to 512 x 512 x 16; we just treat a group of 4 consecutive pixels in a row as a single pixel of the 16-channel output image. Let's call it "interleaved rendering".
But this requires me to magically adjust my fragment shader so that every group of 4 consecutive fragments would have exactly the same interpolation of vertex attributes. Is there any way to do that?
This bad illustration, with 1 render buffer holding a 1024 x 512 4-channel image, is an example of how it should be rendered. With that, a single glReadPixels call can extract the data with strides 512 x 512 x 8.
EDIT: better pictures
What I have now (4 render buffers)
What I want to do natively in OpenGL (this image is done in Python offline)
But this requires me to magically adjust my fragment shader, so that every group of 4 consecutive fragments would have exactly the same interpolation of vertex attributes.
No, it would require a bit more than that. You have to fundamentally change how rasterization works.
Rendering at 4x the width is rendering at 4x the width. That means stretching the resulting primitives, relative to a square area. But that's not the effect you want. You need the rasterizer to rasterize at the original resolution, then replicate the rasterization products.
That's not possible.
From the comments:
It just got to me that I can try to get a 512 x 512 x 2 image of texture coordinates from the vertex+fragment shaders, then stitch it with itself to make it 4 times wider (thus we get the same interpolation) and form the final image from that.
This is a good idea. You'll need to render whatever interpolated values you need to the original size texture, similar to how deferred rendering works. So it may be more than just 2 values. You could just store the gl_FragCoord.xy values, and then use them to compute whatever you need, but it's probably easier to store the interpolated values directly.
I would suggest doing a texelFetch when reading the texture, as you can specify exact integer texel coordinates. The integer coordinates you need can be computed from gl_FragCoord as follows:
ivec2 texCoords = ivec2(int(gl_FragCoord.x * 0.25f), int(gl_FragCoord.y));
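To make that concrete, a hedged sketch of the wide-pass fragment shader could look like this (the #version, uniform names, and the per-group slice selection are illustrative assumptions, and the first pass is assumed to have written the interpolants to a 512 x 512 texture):

// Hypothetical GLSL for the 2048-wide pass: every 4 adjacent fragments fetch the
// same stored interpolants, then each one writes a different 4-channel slice.
static const char* kWidePassFragment = R"(
    #version 330 core
    uniform sampler2D storedInterpolants;   // 512 x 512, written by the first pass
    out vec4 fragColor;
    void main()
    {
        ivec2 texCoords = ivec2(int(gl_FragCoord.x * 0.25), int(gl_FragCoord.y));
        vec2 uv    = texelFetch(storedInterpolants, texCoords, 0).xy;
        int  group = int(gl_FragCoord.x) & 3;   // 0..3: which slice of 4 channels

        // ... compute the 16 channel values from 'uv' here (slice0..slice3) ...
        vec4 slice0 = vec4(0.0), slice1 = vec4(0.0), slice2 = vec4(0.0), slice3 = vec4(0.0);

        fragColor = (group == 0) ? slice0
                  : (group == 1) ? slice1
                  : (group == 2) ? slice2
                  :                slice3;
    }
)";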

OpenGL color index in frag shader?

I have a large sprite library and I'd like to cut GPU memory requirements. Can I store textures on the GPU with only 1 byte per pixel and use that as an index for an RGB color lookup in a fragment shader? I see conflicting reports on the use of GL_R8.
I'd say this really depends on whether your hardware supports that texture format or not. How about skipping the whole issue by using an A8R8G8B8 texture instead? It would just be compressed, i.e. you use a bit mask (or the r/g/b/a members in GLSL) to read "sub-pixel" values: the first pixel is stored in the alpha channel, the second pixel in the red channel, the third pixel in the green channel, etc.
You could even use this to store up to 4 layers in a single image (cutting max texture width/height); picking just one shouldn't be an issue.
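Something like this could work as a minimal GLSL sketch of that idea (uniform names, the channel order, and the 256 x 1 palette are illustrative assumptions; nearest filtering is required):

// Hypothetical fragment shader: unpack one of four 8-bit indices stored per RGBA
// texel, then look the color up in a 256-entry palette texture.
static const char* kPackedIndexFragment = R"(
    uniform sampler2D packedIndices;   // (spriteWidth / 4) x spriteHeight, RGBA8
    uniform sampler2D palette;         // 256 x 1 color table
    uniform float     spriteWidth;     // sprite width in index pixels
    void main()
    {
        vec2 uv   = gl_TexCoord[0].st;
        vec4 four = texture2D(packedIndices, uv);          // four neighbouring indices
        int which = int(mod(uv.s * spriteWidth, 4.0));     // which of the four we are
        float index = (which == 0) ? four.a                // 1st index in alpha
                    : (which == 1) ? four.r                // 2nd in red
                    : (which == 2) ? four.g : four.b;      // 3rd in green, 4th in blue
        // Centre the lookup on a palette texel (256-wide table assumed).
        gl_FragColor = texture2D(palette, vec2((index * 255.0 + 0.5) / 256.0, 0.5));
    }
)";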

Good way to deal with alpha channels in 8-bit bitmap? - OpenGL - C++

I am loading bitmaps with OpenGL to texture a 3D mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels, and I need to figure out the best way to
obtain the transparency value for each pixel
and
render them with that transparency applied.
Does anyone have a good example of this? Does OpenGL support this?
First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R, G, B, A) gets 8 bits. When you upload your texture, specify a 32-bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, eg: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
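Something along these lines, as a sketch (the texture name, dimensions, and draw helper are hypothetical):

// Upload the converted 32-bit RGBA bitmap, then enable blending before drawing.
glBindTexture(GL_TEXTURE_2D, textureId);            // hypothetical texture name
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // mix texture RGB with background by alpha
drawTexturedMesh();                                  // hypothetical draw call
glDisable(GL_BLEND);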
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
If your source bitmap is 8-bit (ie: using a palette with one colour specified as the transparency mask), then it's probably easiest to convert that to RGBA, setting the alpha value to 0 when the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture pixels) with an alpha of zero (fully transparent), replace the RGB color with that of the closest non-transparent texel. That way, when texture coordinates are interpolated, the result won't be blended towards the original transparency color from your BMP.
If your pixmaps are 8-bit single channel, they are either grayscale or use a palette. What you first need to do is convert the pixmap data into RGBA format. For this you allocate a buffer large enough to hold a 4-channel pixmap with the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (look-up table) and put that color value into the corresponding pixel of the RGBA buffer. Once finished, upload it to OpenGL using glTexImage2D.
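A hedged sketch of that conversion (the palette layout, the color-key parameter, and all names are assumptions for illustration):

// Expand an 8-bit paletted image to RGBA8, keying the alpha on a transparent index.
#include <cstdint>
#include <vector>

std::vector<uint8_t> expandPalettedImage(const uint8_t* indices, int w, int h,
                                         const uint8_t (*palette)[3],  // 256 RGB entries
                                         int transparentIndex)         // -1 if none
{
    std::vector<uint8_t> rgba(static_cast<size_t>(w) * h * 4);
    for (int i = 0; i < w * h; ++i) {
        const int idx = indices[i];
        rgba[4 * i + 0] = palette[idx][0];
        rgba[4 * i + 1] = palette[idx][1];
        rgba[4 * i + 2] = palette[idx][2];
        rgba[4 * i + 3] = (idx == transparentIndex) ? 0 : 255;  // color-key alpha
    }
    return rgba;
}

// Then upload the buffer:
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());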
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture. Then in the fragment shader:
uniform sampler2D texture;       // 2D texture holding the 8-bit palette indices
uniform sampler1D palette_lut;   // 1D RGBA palette look-up table

void main()
{
    float palette_index = texture2D(texture, gl_TexCoord[0].st).r;
    vec4 color = texture1D(palette_lut, palette_index);
    gl_FragColor = color;
}
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this is done per object, it is rather simple, but it becomes tedious if you need to sort the faces of a mesh every single frame. A way to avoid this is to break meshes down into convex submeshes (of course, a mesh that's already convex cannot be broken down further). Then use the following method:
Enable face culling
for convex_submesh in sorted(meshes, far to near):
    set face culling to front faces (i.e. the back side gets rendered)
    render convex_submesh
    set face culling to back faces (i.e. the front side gets rendered)
    render convex_submesh again
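In plain OpenGL calls the loop might look roughly like this (Submesh, drawSubmesh() and the sorted container are hypothetical):

// Two-pass rendering of each convex submesh, far to near: back faces first,
// then front faces, so the blending stays consistent.
glEnable(GL_CULL_FACE);
for (const Submesh& s : submeshesSortedFarToNear) {
    glCullFace(GL_FRONT);   // cull front faces -> the back side gets rendered
    drawSubmesh(s);
    glCullFace(GL_BACK);    // cull back faces  -> the front side gets rendered
    drawSubmesh(s);
}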

How does one use clip() to perform alpha testing?

This is an HLSL question, although I'm using XNA if you want to reference that framework in your answer.
In XNA 4.0 we no longer have access to DX9's AlphaTest functionality.
I want to:
Render a texture to the backbuffer, only drawing the opaque pixels of the texture.
Render a texture, whose texels are only drawn in places where no opaque pixels from step 1 were drawn.
How can I accomplish this? If I need to use clip() in HLSL, how do I check the stencil buffer that was drawn to in step 1 from within my HLSL code?
So far I have done the following:
_sparkStencil = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.GreaterEqual,
    ReferenceStencil = 254,
    DepthBufferEnable = true
};
DepthStencilState old = gd.DepthStencilState;
gd.DepthStencilState = _sparkStencil;
// Only opaque texels should be drawn.
DrawTexture1();
gd.DepthStencilState = old;
// Texels that were rendered from texture1 should
// prevent texels in texture 2 from appearing.
DrawTexture2();
Sounds like you want to draw only pixels that are within epsilon of full alpha (1.0, 255) the first time, while not affecting pixels that are within epsilon of full alpha the second time.
I'm not a graphics expert and I'm operating on too little sleep, but you should be able to get there from here through an effect script file.
To write to the stencil buffer you must create a DepthStencilState that writes to the buffer, then draw any geometry that is to be drawn to the stencil buffer, then switch to a different DepthStencilState that uses the relevant CompareFunction.
If there is some limit on which alpha values are to be drawn to the stencil buffer, then use a shader in the first pass that calls the clip() intrinsic on floor(alpha - val) - 1 where val is a number in (0,1) that limits the alpha values drawn.
I have written a more detailed answer here:
Stencil testing in XNA 4

Accessing 32-bit Depth Buffer from fragment shader?

I'm trying to implement the following shadowing technique. I read about it on the NVIDIA site and it seemed like a good approach. I would prefer it over calculating shadow volumes on the CPU because it seems more 'true', and I could use it for soft shadowing:
1st pass:
Fill the depth buffer from the perspective of LIGHT0. Copy this depth buffer for the second pass. (*)
2nd pass:
Render view from EYE, and for each fragment:
Get the XY location in the depth buffer stored in (*). Get the corresponding 32-bit value.
Calculate the distance to the light.
Compare this distance with the stored depth buffer value.
If it is larger, the fragment is drawn in glDisable(LIGHT0) mode; otherwise it is drawn with the light enabled. For this purpose I use two fragment shaders, and fragments blend/switch between the two according to the result of the distance comparison.
Now, I want to do the last steps in the fragment shader, for several reasons. One of them is that I want to take the distance into account for the 'effect' of the shadowing. In my game, if the distance to the occluding object is small, it is safe to say that the shadow will be very hard. If it is further away, global illumination kicks in more and the shadow is softer. This works because it's a card game; it would not be the case for more complicated concave shapes.
But I'm new to OpenGL and I don't understand how to do any of the following:
How to access that first-pass depth buffer in the fragment shader without copying it to a 2D texture. I assume that is not possible?
Whether copying the 32-bit depth buffer to a texture that has 8 bits in each R, G, B, A component and then re-assembling that value in the fragment shader is the most efficient thing I can do?
Whether there are cross-platform extensions I could use for this.
Thanks if someone can help me or give me some more ideas; I'm kind of stumped right now, and my lack of good hardware and free time makes debugging and trying everything out an exhausting process.
The first way is to use FBOs with a GL_DEPTH_COMPONENT texture attached to the GL_DEPTH_ATTACHMENT attachment point.
The second way is to use glCopyTexImage2D, again with a GL_DEPTH_COMPONENT texture.
FBOs are cross-platform and available in almost every modern OpenGL implementation; you should have them available to you.
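A minimal sketch of the FBO route (the 1024 x 1024 size and parameter choices are my own; error checking omitted):

// Create a depth texture and attach it to an FBO for the depth-only light pass.
GLuint depthTex = 0, fbo = 0;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   // no color output in the depth-only pass
glReadBuffer(GL_NONE);

// Render the scene from LIGHT0's point of view here, then bind the default
// framebuffer again and sample depthTex as a regular 2D texture in the second pass.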
You are right: you need to create a 2D texture from the depth buffer values in order to use those values in the 2nd pass.
Concerning the texture itself, I think that copying from the 32-bit depth buffer to 8-bit RGBA will not use a cast to convert the data: for a mid-range depth value (say 0x80000000), you will get a half-tone on R, G, B and A in your RGBA texture:
RGBA[0] = 0x80;
RGBA[1] = 0x80;
RGBA[2] = 0x80;
RGBA[3] = 0x80;
Whereas you might have expected (a raw cast):
RGBA[0] = 0x80;
RGBA[1] = 0;
RGBA[2] = 0;
RGBA[3] = 0;
As for the right format, I am not sure, but I would suggest not converting it during the copy, since you don't want the conversion overhead.