Limitations of prebaked soft shadows - glsl

I've been working for two months on prebaking NVIDIA's PCSS with cube textures in WebGL.
I have succeeded in implementing the beast in real time. Prebaking it finally works too, but with some obvious artifacts.
Briefly, below I highlight the main artifact I get and explain why it appears, in a general case that anyone who tries to prebake soft shadows should encounter.
To sum up this thread, any advice on this question could help me:
How can we deal with prebaked soft shadow artifacts?
Let's simplify the situation by talking about soft shadows instead of focusing on PCSS. That means we're working with a shadowMap containing a visibility value ranging from 0 to 1 for each texel (whereas hard shadows generate shadowMaps with only two possible values: 0 or 1).
Since we're not in real time, we have to fix a point of view instead of using the camera's point of view each frame. My soft shadows are computed from the light's point of view. To build them, I:
Compute a basic shadowMap with all the blockers (occluders)
In a second pass, containing only the receivers, compute the soft shadows using the shadowMap and store the visibility for each pixel
This gives me a precomputed shadowMap I can sample in real time.
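For concreteness, the real-time side then boils down to a single lookup. Here is a minimal sketch of that sampling in GLSL ES 1.00 (the uniform and varying names are illustrative, not my actual code):

```glsl
precision highp float;

uniform samplerCube uBakedVisibility; // prebaked visibility in [0, 1], from the two passes above
uniform vec3 uLightPos;               // world-space light position used at bake time

varying vec3 vWorldPos;               // interpolated world-space fragment position

void main() {
    // The direction from the light to the fragment indexes the cube map,
    // exactly as it did when the visibility was baked.
    vec3 lightToFrag = vWorldPos - uLightPos;
    float visibility = textureCube(uBakedVisibility, lightToFrag).r;
    gl_FragColor = vec4(vec3(visibility), 1.0);
}
```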
The first limitation of prebaked soft shadows is that blockers cannot be part of the receivers. Imagine a room full of occluders: the only receiver should then be the room itself; otherwise, you'll lose some shadows on the room mesh.
The reason is that we are stuck in the light's point of view, so we cannot see what's behind a blocker if we make it a receiver too.
This limitation yields a shadowMap with the correct visibility value for the room, but obviously not for the shadows on the blockers.
I don't mind missing the exact visibility value on blockers; I hoped to use the room's shadow as an approximation for the shadow on the blockers. But in practice, you get hard shadows on blockers, because behind a blocker the visibility value baked for the room is totally black.
Here is a graphic to illustrate why this is happening.
In the top case, I only take the room as receiver. In the bottom case, I use the blocker as a receiver too. You can easily see that a problem appears in both cases: for the same texel of the shadowMap, we need two different visibility values, since the point on the room is totally black whereas the point on the blocker is in penumbra.
I have many ideas to deal with this artifact:
Send a meshId for each mesh to the shaders and evaluate this id to know whether we are shading a blocker or not (see the sketch below).
Make a PCSS pass for each blocker separately and mix all the shadowMaps at the end.
Make a PCSS pass for each receiver separately (taking into account blockers).
Precompute half of the calculation and do the other half in real time.
Precompute the shadowMap from several points of view other than the light's.
I failed at ideas 1 and 2.
Idea 3 seems to be the same as idea 2.
Idea 4 is pointless; it's not precomputation anymore.
And I fear I won't be able to make idea 5 generic.
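For reference, here is roughly what idea 1 looks like on the shader side, assuming the bake also wrote the receiver's meshId into a second channel of the map (all names are illustrative, not my exact code):

```glsl
precision highp float;

uniform samplerCube uBakedShadow; // r: visibility, g: meshId of the receiver it was baked for (/255)
uniform float uMeshId;            // id of the mesh currently being shaded, in [0, 255]
uniform vec3 uLightPos;

varying vec3 vWorldPos;

void main() {
    vec3 lightToFrag = vWorldPos - uLightPos;
    vec4 baked = textureCube(uBakedShadow, lightToFrag);
    float bakedId = floor(baked.g * 255.0 + 0.5);

    // Trust the baked visibility only if it was computed for this mesh;
    // otherwise fall back to fully lit (or to a cheap runtime term).
    float visibility = abs(bakedId - uMeshId) < 0.5 ? baked.r : 1.0;
    gl_FragColor = vec4(vec3(visibility), 1.0);
}
```

The fallback is the weak point: where the ids differ, there is no baked value at all for the current surface, which is exactly the missing-information problem described above.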
There is very little documentation on the subject, and most of what I found works with ideal scenes where no blocker shadows another blocker, as if that weren't a common case.
So maybe someone here has already faced this issue or is interested in the subject? I hope this thread will help other people after me too.
In any case, thank you for considering the issue.

Related

Can't get a full view of my car model on a ray-tracer

I currently have a ray-tracer that can read .obj models and render the objects described in them. Until now, I was basically working with .obj models whose vertices were around the origin, generally within a distance of 10 and at most around 100.
Now I have downloaded a different model whose vertices are far from the origin, always at least hundreds of units away, with some vertices about 5000 away along some axes.
The problem is that now I cannot focus the entire car!
One of my tests used a camera-to-origin distance of -3639. The result was this:
Then I stepped the camera back to -4639, and this is what was produced:
Changing my approach, I moved the camera closer instead, placing it at -2639.
The result:
So at -2639 I can see the whole car, but it does not fit in my field of view, while at -3639 the light is already fading away for some reason.
I imagine it might be possible to see the full car properly lit using an intermediate distance between -2639 and -3639, and by experimenting with the field-of-view value, but there is something odd about the light not covering the entire car at -3639, and I would like to find out why.
So I would appreciate suggestions about the cause of this issue and about how to proceed in this kind of situation to get the entire car in view.
Your question mentions that you are changing the camera position. However, the images show the lighting changing between the various cases: just a spotlight in one case, and more of the car being lit in another.
Most likely, in the third case, nothing of the car is lit, hence everything comes up black. Start by making the light stay the same when the camera moves, and see if that fixes your issue.
If you move the camera: it could help to look into the settings for the front and back clipping planes.
If you don't move the camera: the FOV should be larger if the object is larger. I would avoid doing this, as it will likely lead to more problems when you load more than one object of different sizes.
Personally, I would scale the input from the file, ideally to some SI unit that makes sense.

Create points (gl_Points) and show sequentially in modern OpenGL - PyOpenGL

I have been using OpenGL for a while now and continue to stay positive about making progress. However, I now have an issue that I have been unable to solve and it's taking a while. So, the issue is that I would like to:
Create points on screen sequentially (to appear every second for example)
Move these points independently
So far I have two methods on paper: one is to upload all vertices to a VBO and make each point visible (draw) in turn; the other is to create an empty VBO (data set to NULL) and upload the data one point at a time.
Note that I want to transform these points independently of each other - can a uniform still be used? If so, how can I set this up to draw point, transform, draw point, transform?
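A rough sketch of the shader side I have in mind (all names are placeholders, and this assumes a spawn time and an offset are stored per point in the VBO rather than using one uniform per point):

```glsl
// --- vertex shader ---
#version 330 core
layout(location = 0) in vec3 aPosition;   // base position of the point
layout(location = 1) in vec3 aOffset;     // per-point transform, rewritten in the VBO when a point moves
layout(location = 2) in float aSpawnTime; // moment at which this point should appear

uniform float uTime;  // seconds since start, set once per frame
out float vVisible;

void main() {
    vVisible = step(aSpawnTime, uTime);   // 0.0 before spawn, 1.0 after
    gl_Position = vec4(aPosition + aOffset, 1.0);
    gl_PointSize = 4.0;                   // needs glEnable(GL_PROGRAM_POINT_SIZE)
}

// --- fragment shader ---
#version 330 core
in float vVisible;
out vec4 fragColor;

void main() {
    if (vVisible < 0.5) discard;          // hide points that haven't spawned yet
    fragColor = vec4(1.0);
}
```

That way a single glDrawArrays(GL_POINTS, 0, n) call would cover all points, and moving one point would only mean rewriting its aOffset entry with glBufferSubData.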
If I'm going about this completely wrong or there is a better, more improved method then please say so.
Many thanks!

Bullet physics multi-sphere body getting sucked through ground

I have made several attempts to fix this and have read all I could find here, on the forums, and on Google. I used a CCD threshold much lower than my object's movement speed and a CCD radius much smaller than the object's half radius. The only thing this does is make the multi-sphere get stuck on seams. I also tried setting ERP/ERP2 to 0.9/1.0.
[EDIT] OK, so after some more reading: CCD will not work if the sphere is already touching the ground, and ERP only affects objects with joints, if I understand correctly.
The ground is a trimesh made in Blender, using obtainStaticNodeShape to get the shape. I have tried scaling the mesh to get smaller polygons, but even the smallest size acceptable for the game does not work: about 32k indices with 11k polys over 500x500 units, while the multi-sphere has a radius of 0.45 units.
[EDIT] The multi-sphere is two spheres on top of each other, restricted to angular movement around the Y-axis only, so no rolling.
The sphere gets "sucked" through the ground fast; it does not sink slowly. I tried making the fixed timestep smaller: 1/420 with 64 substeps did not give any better results. This happens most often while ascending or descending a slope. My ground is gently sloped, but an incline of 20% seems to be enough for it to fall through a lot, though it can happen on level ground too, just not as often.
When I did my first test I used a big stretched out cube as ground and it worked well.
So my problem now is that I don't even know why this is happening, so I have no idea what to try next. Can anyone please give me a solution or some pointers?
Is there any use in increasing the multi-sphere size (for the game I cannot increase it by more than 25-30%)? I have not explicitly set any collision margins, but I think that would just make my sphere float above the ground. Is there any benefit in changing the ground from a static object to a kinematic one?
Would it work to use a ray test from the sphere straight down and push it up if it ends up lower than the ground? I think not: why would it fall through if it could detect the ground in the first place?
[EDIT: additional info]
There are quite a few occurrences of similar problems floating around on forums and here at Stack Overflow. Most seem to be about very small objects: small objects (under 0.2 m) are clearly not a good option for Bullet unless you want to increase the number of simulation steps quite a lot. My problem does not seem to fall into this category, since my smallest object is 0.9 m in diameter.
I have now also done a debug draw to see the normals of the trimesh that I use as ground, and I cannot find any errors in them.
I also tried increasing the collision margins of the spheres, but to no avail.
I further tried the suggested settings:
((btDefaultCollisionConfiguration)world.collisionConfiguration).setPlaneConvexMultipointIterations(3, 3);
((btDefaultCollisionConfiguration)world.collisionConfiguration).setConvexConvexMultipointIterations(3, 3);
No difference.
I did, however, read about big trimeshes not working very well for raycasting. My mesh is big, 512x512 units, but I am not sure whether this could cause my object to fall through it.
I also read that sphere shapes have problems with trimeshes, but again I am not sure whether that applies to my case; the sphere I am using is locked for rotation on all axes.
I have also tried using a btCapsule, but it gave the same results. Would a cylinder work better?
[EDIT]
I have tried using a cylinder instead, since the sphere and capsule did not work. The cylinder works a lot better, though I have still seen it fall through once. The cylinder was jerking around a lot before it went through, whereas the sphere/capsule would just slip through fast and easily. Maybe this is a clue to the underlying problem? A cylinder is not the best character shape, though.
Another possible cause could be a triangle in the mesh with very long sides or a large ratio between side lengths. I found a few of those on a slope where my sphere always falls through. If this is indeed the problem, can I do anything about it other than manually editing the mesh in Blender?
As you can see, there are a lot of these questions and a lot of possible answers, and I have no idea which one corresponds to my case. Some pointers from someone with better insight would mean a lot. Thanks!

OpenGL - sphere shrinking and expanding

I'm revising for an OpenGL exam and keep coming across this question on past papers. It's not something I've been taught and I was wondering if anyone could set me off in the right direction.
Sorry I haven't added what I have so far, there's not much because I don't really understand the question either.
"You wish to create a simple animation that shows a small red sphere shrinking and expanding. Specifically the radius oscillates sinusoidally between 0.3 and 0.5 in magnitude.
(i) Discuss the role of the glutIdleFunc in the animation.
(ii) Write the display method that performs the above animation;
assume the radius vector R is of type double and is declared with
global scope."
The glutIdleFunc documentation could set you off in the right direction. Notice how it performs calculations in the background, so a possible answer could discuss how the animation behaves when you set that particular callback, leave it empty, or don't set it at all.
If you are allowed to use glutSolidSphere or glutWireSphere, the display method could be quite simple if you know the basics of OpenGL (assuming you've studied and attended class :). But if you have to use OpenGL 3.3 or 4.0+, you will probably have to come up with an algorithm to first generate the vertices of the sphere (simpler) and then the indices of those vertices (a little trickier). There are numerous examples on the Internet and Stack Overflow of how to do that, I believe.
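If it does have to be the modern pipeline, note that the oscillation itself need not touch the vertex data: upload a unit sphere once and scale it in the vertex shader. A sketch with made-up names, where the idle callback only updates uRadius to 0.4 + 0.1 * sin(t):

```glsl
#version 330 core
layout(location = 0) in vec3 aPosition; // vertex of a unit sphere, generated once

uniform float uRadius; // set each frame from the idle callback: 0.4 + 0.1 * sin(t)
uniform mat4 uMVP;     // model-view-projection matrix

void main() {
    // Scaling the unit sphere by uRadius makes the rendered radius
    // oscillate between 0.3 and 0.5, as the question requires.
    gl_Position = uMVP * vec4(aPosition * uRadius, 1.0);
}
```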
Good luck on your exam!

OpenGL - A way to display lot of points dynamically

I have a question regarding a subject that I am currently working on.
I have an OpenGL view in which I would like to display points.
So far, this is something I can handle ;)
For every point, I have its coordinates (X, Y, Z) and a value (unsigned char).
I have a color array giving the link between one value and a color.
For example, 255 is red, 0 is blue, and so on...
I want to display those points in an OpenGL view.
I want to use a threshold value so that, depending on it, I can modify the transparency of a point's color based on the point's value.
I also want performance to stay good even with a lot of points (5 billion in the worst case, but 1-2 million in a standard case).
I am now looking for an effective way to handle this.
I am interested in VBOs. I have read that they allow good performance and that I can modify the buffer as I want without recalculating it from scratch (unlike a display list).
That way, I could solve the threshold issue.
However, doing this on a million points dynamically will involve some heavy calculations (at least a pretty bad for loop), no?
I am open to any suggestions and would like to discuss any of your ideas!
Trying to display a billion points or more is generally (forgive the pun) pointless.
Even an extremely high resolution screen has only a few million pixels. Nothing you can do will get it to display more points than that.
As such, your first step is almost undoubtedly to figure out a way to restrict your display to a number of points that's at least halfway reasonable. OpenGL can (and will) oblige if you ask it to display more, but your monitor won't, and neither will mine or anybody else's.
Not directly related to the OpenGL part of your question, but if you are looking at rendering massive point clouds you might want to read up on space partitioning hierarchies such as octrees to keep performance in check.
Put everything into one VBO and draw it as an array of points: glDrawArrays(GL_POINTS, 0, num). Calculate alpha in a pixel shader, using a threshold passed as a uniform.
If you want to change a small subset of points, you can map a sub-range of the VBO. If you need to update large parts frequently, you can use Transform Feedback to keep the work on the GPU.
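A sketch of such a pixel shader, assuming the per-point value is normalized to [0, 1] and the value-to-color array has been uploaded as a small 1D texture (names are illustrative):

```glsl
#version 330 core
in float vValue;              // per-point value from the VBO, normalized to [0, 1]

uniform sampler1D uColorLUT;  // 256-entry value-to-color table
uniform float uThreshold;     // set once per frame

out vec4 fragColor;

void main() {
    vec3 color = texture(uColorLUT, vValue).rgb;
    // Fade out points below the threshold instead of rewriting the VBO.
    float alpha = step(uThreshold, vValue);
    fragColor = vec4(color, alpha);
}
```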
If you need to simulate something for the updates, consider using CUDA or OpenCL to run the update entirely on the GPU; this will give you the best performance. Otherwise, you can use a single VBO and update it once per frame from the CPU. If that gets too slow, you could try multiple buffers and distribute the updates across several frames.
For the threshold, use a shader uniform variable instead of modifying the vertex buffer. This lets you set a per-frame value that is combined with the data from the vertex buffer (for instance, you set a float minVal; and every vertex whose attribute is less than minVal gets discarded in the geometry shader).
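For the geometry-shader variant, a minimal sketch (again with assumed names) would be:

```glsl
#version 330 core
layout(points) in;
layout(points, max_vertices = 1) out;

in float vValue[];    // per-point value passed through from the vertex shader
uniform float minVal; // threshold set once per frame

void main() {
    // Emit the point only when its value reaches the threshold;
    // below-threshold points never reach rasterization.
    if (vValue[0] >= minVal) {
        gl_Position = gl_in[0].gl_Position;
        EmitVertex();
        EndPrimitive();
    }
}
```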