Is there a way to render points in XTK that scale with zoom? - xtk

Parts of a brain have been rendered with XTK [nightly] at http://mindboggle.info/. However, if you zoom out, you can see that the red points/cubes don't scale.

Since XTK uses gl_PointSize in the shaders to set the point size when it is specified with
X.object.setPointSize(size), the camera position (from zooming etc.) is not used to recalculate the point size on interaction.
What you could do is the following, as shown in lesson 09:
Code: https://github.com/xtk/lessons/blob/master/09/index.html
Here is the live version: http://lessons.goxtk.com/09/
Note that the loading takes a little longer than if just rendering as points, since the sphere meshes are actually created using CSG.
This will not work for the lineWidth, though :)
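For reference, here is a minimal sketch (plain C++ math, not XTK API; all names are illustrative) of what a distance-aware point size would have to compute - the on-screen size of a fixed world-space radius shrinks with distance under perspective projection, which is exactly what a constant gl_PointSize does not do:

#include <cmath>

// Hypothetical helper: pixel size of a sphere of worldRadius rendered at
// `distance` from the camera, for a perspective projection with vertical
// field of view fovY (radians) and a viewport of viewportHeight pixels.
// A custom shader that wants zoom-dependent points would set gl_PointSize
// to something like this value.
float pointSizeInPixels(float worldRadius, float distance,
                        float fovY, float viewportHeight)
{
    // Height of the view frustum at `distance`, in world units.
    float frustumHeight = 2.0f * distance * std::tan(fovY * 0.5f);
    // Fraction of the screen height covered by the sphere, in pixels.
    return (2.0f * worldRadius / frustumHeight) * viewportHeight;
}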

Related

Rendering Point Sprites across cameras in cube maps

I'm rendering a particle system of vertices, which are then tessellated into quads in a geom shader, and textured/rendered as point sprites. Then they are scaled in size depending on how far away they are from the camera. I'm trying to render out every frame of my scene into cube maps. So essentially I place six cameras into my scene and point them in each direction for the face of the cube and save an image.
My point sprites are of varying sizes. When they near the border of one camera's view (if they are large enough), they appear in two cameras simultaneously. Since point sprites always face the camera, this means they are not continuous along the seam when I wrap my cube map back into 3d space. This is especially noticeable when the points are quite close to the camera, as the points are larger and stretch further across both camera views. I'm also doing some alpha blending, so this may be contributing to the problem as well.
I don't think I can just cull points that near the edge of the camera, because when I put everything back into 3d I'd think there would be strange areas where the cloud is more sparsely populated. Another thought I had would be to blur the edges of each camera, but I think this too would give me a weird blurry zone when I go back to 3d space. I feel like I could manually edit the frames in photoshop so they look ok, but this would be kind of a pain since it's an animation at 30fps.
The image attached is a detail from the cube map. You can see the horizontal seam where the sprites are not lining up correctly, and a slightly less noticeable vertical one on the right side of the image. I'm sure that my camera settings are correct, because I've used this same camera setup in other scenes and my cubemaps look fine.
Anyone have ideas?
I'm doing this in openFrameworks / openGL fwiw.
Instead of facing the camera, make them face the origin of the cameras? Not sure if this fixes everything, but intuitively I'd say it should look close to OK. Maybe this is already what you do, I have no idea.
(I'd like for this to be a comment, but no reputation)
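To make that suggestion concrete, here is a hedged CPU-side sketch (GLM-style C++; the original is openFrameworks with a geometry shader, and all names here are illustrative) of building a quad that faces the camera position rather than the view plane, which keeps neighbouring cube-map faces consistent. The same basis could be computed in the geometry shader instead:

#include <cmath>
#include <array>
#include <glm/glm.hpp>

// Hypothetical sketch: build a billboard quad that faces the camera
// *position* (spherical billboarding) instead of being parallel to the
// view plane of whichever cube-map camera is rendering.
std::array<glm::vec3, 4> billboardTowardCamera(const glm::vec3& particlePos,
                                               const glm::vec3& camPos,
                                               float halfSize)
{
    glm::vec3 toCam = glm::normalize(camPos - particlePos);
    // Pick a world up axis that is not parallel to toCam.
    glm::vec3 worldUp = std::abs(toCam.y) < 0.99f ? glm::vec3(0, 1, 0)
                                                  : glm::vec3(1, 0, 0);
    glm::vec3 right = glm::normalize(glm::cross(worldUp, toCam));
    glm::vec3 up    = glm::cross(toCam, right);

    return {{ particlePos - right * halfSize - up * halfSize,
              particlePos + right * halfSize - up * halfSize,
              particlePos + right * halfSize + up * halfSize,
              particlePos - right * halfSize + up * halfSize }};
}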

OpenGL: reflection matrix issue

I'm currently working on a reflection of my OpenGL scene (basically consisting of a skycube and a small white cube inside of that). The reflection should happen in the xz-plane (with y=0). I've managed to render that into a FBO, but currently there is some issue with the view or perspective matrix. The reflection is either as seen from the wrong view position, or it just inverts what is seen on the screen.
What I need, however, is a real mirror-like reflection. In most tutorials they say that you should just scale(1,-1,1) the view matrix or use glScalef(1,-1,1), but neither of these works for me - the effects are as described above.
Below are two screenshots of the best I currently get, using the following code immediately before rendering the (to be mirrored) scene:
view = m_camera*mat4::scale(1,-1,1);
projection = m_cameraPerspective;
Corresponding original scene:
Reflected scene:
Note how this is actually the reflected scene (e.g., the clouds from the top are visible instead of the water from the bottom, as in the original rendering), but the positions are somehow not correct, e.g., the white cube is not at the same position on screen (different distance to the window border).
Please ignore the wrong colors. That's because I quickly hacked a function that writes the pixel values into a tga file (from the rendered texture). When I actually enable rendering the texture on my mirror plane (which is currently disabled in both render steps), the colors are correct.
What's wrong with my reflection matrix?
As stated in the comment, this is actually correct.
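For completeness, a minimal sketch of the usual mirrored-scene pass built around that same view = camera * scale(1,-1,1) construction (GLM-style C++ here; the FBO binding, draw call and matrix names are assumptions, not the asker's actual code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// OpenGL headers/loader assumed to be included by the application.

// Hypothetical sketch: render the scene mirrored about the y = 0 plane
// into the (already bound) reflection FBO.
void renderReflection(const glm::mat4& camera, const glm::mat4& perspective)
{
    glm::mat4 reflectY = glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, -1.0f, 1.0f));
    glm::mat4 view = camera * reflectY;   // same construction as in the question
    glm::mat4 projection = perspective;   // unchanged

    glFrontFace(GL_CW);   // mirroring flips triangle winding, so flip culling
    // ... draw the scene with (projection, view) ...
    glFrontFace(GL_CCW);  // restore the default winding for the normal pass
}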

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine; I don't run into trouble with that.
When it comes to the point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and the camera position, explained precisely here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light-sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have 3 advantages:
No cull-face issue
No camera-position-inside-light-sphere issue
Much more efficient (number of vertices severely reduced + no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position could easily be calculated as usual:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should be dependent on the distance (camera to light) and somehow the perspective view.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to. This data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle was projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in this case none of the fragments will be generated, and the light will "disappear".
Lights described in the article will have a sharp falloff - understandably so, since a sphere or circle will have a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulations of OpenGL state and additional draw operations will also introduce overhead, and it is not clear which one will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the vector length. It must be a point on the border (silhouette) in screen space, obviously.
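To illustrate that last suggestion, a hedged sketch (GLM-style C++; lightPos, lightRadius and cameraRight are illustrative names) that projects the light centre and an approximate silhouette point, then measures their distance in normalized device coordinates:

#include <glm/glm.hpp>

// Hypothetical sketch: estimate the screen-space circle of a light sphere.
// `cameraRight` is the camera's world-space right axis; offsetting along it
// gives an approximate silhouette point, as suggested above.
glm::vec2 screenCircle(const glm::mat4& mvp, const glm::vec3& lightPos,
                       float lightRadius, const glm::vec3& cameraRight,
                       float& outRadiusNdc)
{
    glm::vec4 c = mvp * glm::vec4(lightPos, 1.0f);
    glm::vec2 center = glm::vec2(c) / c.w;                      // perspective division

    glm::vec3 onSphere = lightPos + cameraRight * lightRadius;  // point on the sphere
    glm::vec4 s = mvp * glm::vec4(onSphere, 1.0f);
    glm::vec2 edge = glm::vec2(s) / s.w;

    outRadiusNdc = glm::length(edge - center);                  // circle radius in NDC
    return center;
}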

OpenGL Perspective Texture Flickering

I have a very simple OpenGL (3.2) setup, no lighting, perspective projection and a simple shader program (applies projection transformation and uses texture2D to read the color from the texture).
The camera is looking down the negative z-axis and I draw a few walls and pillars on the x-y-plane with a texture (http://i43.tinypic.com/2ryszlz.png).
Now I'm moving the camera in the x-y-plane and this is what it looks like:
http://i.imgur.com/VCrNcly.gif.
My question is now: How do I handle the flickering of the wall texture?
As the camera centers the walls, the view angle onto the texture compresses the texture for the screen, so one pixel on the screen is actually several pixels on the texture, but only one is chosen for display. From the information I have access to in the shaders, I don't see how to perform an operation which interpolates the required color.
As this looks like a problem nearly every 3D application should have, the solution is probably pretty simple (I hope?).
I can't seem to understand the images, but from what you are describing you seem to be looking for MIPMAPPING. Please google it; it's a very easy and very widely used concept. You will be able to use it by adding one or two lines to your program. Good luck. I'd be more detailed, but I am out of time for today.
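To make that concrete, a minimal sketch of those "one or two lines" (assuming an OpenGL 3.2 core context with the wall texture already created and bound to GL_TEXTURE_2D):

// Enable trilinear, mipmapped minification for the bound texture.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);  // build the mip chain (core since OpenGL 3.0)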

Voxel Cone Traced Soft Shadows

I have recently implemented soft shadows using voxel cone tracing in OpenGL 4.3 by tracing a cone in the direction of the light and accumulating opacity values.
The key thing that I am trying to resolve or hide is the very voxelized shadowing effect as the occluded surface gets closer to the occluder, as well as the clear spots in the shadow due to surface voxelization. I am using low-resolution voxels (64x64x64); however, even if I use higher-resolution voxels, some of the low-res voxels at a higher mip-map level are still captured in the trace.
So here's my first idea: I want to keep the softest parts of the shadow that are furthest away and replace the parts of the shadow that are closer to the occluder with a shadow map. The shadow map will fade as it gets further away from each occluder, and I will somehow blend it in with the cone-traced shadows.
Can anyone think of a way to fade a shadow away based on distance from each object for a shadow map, and then have it blend smoothly into the cone-traced shadow?
Another idea I have would be to somehow ray-trace shadows onto surfaces that are closer to an occluder, but this would probably be too expensive.
Alternatively, I would welcome any other ideas to help improve my soft shadow algorithm.
I've also put up a video to show it in motion:
https://www.youtube.com/watch?v=SUiUlRojBpM
Still haven't found a way to resolve the shadowing issue.
I'm guessing the "clear spot" artifacts are occurring due to large voxels only being partially filled with geometry ("accumulating opacity values"). How many samples are you taking when converting from rasterized pixels to voxels? If the sample volume/voxel volume is small, then there could be issues with correctly rendering transparency - there will be noise, indicated by lighter areas.
Also, is your voxels' transparency direction-dependent? Based on the author's original paper, directional dependence is important to ensure semi-opaque voxels are rendered correctly.
A quick picture to explain
"for a shadow-map and then have it blend smoothly into the cone-traced shadow?"
This seems like you are kind of shooting yourself in the foot. You get an even larger performance hit and get the disadvantages of both shadow mapping and voxel cone tracing. Voxel cone tracing is expensive but can give nice soft shadows and do global illumination. Shadow mapping is better at doing hard shadows and is faster for smaller scenes, but as you add more geometry you end up redrawing the same stuff multiple times, at least once for each light.
Great work btw. I came across your problem while doing preliminary research for my own DirectX implementation of voxel cone tracing.
[Edit]
I realized that I made a typo in the picture. The picture on the right should be 4/64, rather than 4/16. I forgot about the Z dimension.
Well, in this case you can do it by adding more lights. You can add more lights close to the original one and then compose the shadow of the original light with the shadows of the bunch of nearby lights. That is the 'area light' effect.
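A rough sketch of that idea (illustrative names only; traceShadow stands in for whatever cone-trace or shadow-map query the engine already has, returning a visibility in 0..1):

#include <vector>
#include <functional>
#include <glm/glm.hpp>

// Hypothetical sketch of the 'area light' approximation: replace one light
// by a cluster of slightly offset lights and average their visibility.
float areaLightVisibility(const glm::vec3& surfacePoint,
                          const glm::vec3& lightCenter,
                          float lightSize,
                          const std::vector<glm::vec3>& jitterOffsets,
                          const std::function<float(const glm::vec3&,
                                                    const glm::vec3&)>& traceShadow)
{
    float sum = 0.0f;
    for (const glm::vec3& offset : jitterOffsets) {
        glm::vec3 samplePos = lightCenter + offset * lightSize; // jittered light
        sum += traceShadow(surfacePoint, samplePos);            // 0 = occluded, 1 = lit
    }
    // Average of the individual shadows; assumes jitterOffsets is non-empty.
    return sum / static_cast<float>(jitterOffsets.size());
}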