I need to build a feature in WebGL where I have a horizontal list of meshes (say 20), only 3 of which are shown at a time, and the 2 at the edges fade in/out. The list slowly animates from one side to the other (sliding left or right).
Here is sample code showing how it is done using CSS: https://jsfiddle.net/jqsLh4vg/91/
What would be possible options for achieving this kind of effect?
My current approach, which I am trying now (possibly not the most performant solution):
Calculate the visible area
Calculate what percentage of each mesh lies inside the visible area on each render
Draw a gradient at the edges using a fragment shader (this is where I'm stuck and can't figure out a solution; see the sketch below)
But maybe there is an easier approach for this kind of effect? (Imagine having 100 elements whose visible areas I would need to calculate manually on every render.)
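For what it's worth, a minimal sketch of the step-3 gradient, assuming the fade is computed purely in screen space inside the fragment shader, so no per-mesh visibility percentage has to be computed on the CPU. The uniform names (u_resolution, u_fadeWidth, u_color) are made up for illustration, and alpha blending must be enabled for the fade to show:

```typescript
// Hypothetical sketch: fade fragments out near the left/right edges of the
// visible strip entirely in screen space. The uniform names are made up for
// this example.
const fadeFragmentShader = `
  precision mediump float;

  uniform vec2  u_resolution; // size of the visible strip, in pixels
  uniform float u_fadeWidth;  // width of the fade band at each edge, in pixels
  uniform vec4  u_color;      // base mesh colour

  void main() {
    // Distance of this fragment from the nearest left/right edge.
    float distFromEdge = min(gl_FragCoord.x, u_resolution.x - gl_FragCoord.x);
    // 0.0 right at the edge, 1.0 once we are u_fadeWidth pixels inside.
    float fade = clamp(distFromEdge / u_fadeWidth, 0.0, 1.0);
    gl_FragColor = vec4(u_color.rgb, u_color.a * fade);
  }
`;
```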
I'm working on an alpha blending effect for which I need to be able to exclude a rectangular area of varying size and position on the screen from being rendered to, basically the opposite of what an SDL_Viewport does. I can't use occlusion methods like SDL_RenderClear or SDL_RenderFillRect for this since that would interfere with the effect I'm going for; I actually need this area to be rendered to only once per frame.
Is there a better solution than having 4 constantly updated SDL_Rects acting as a frame of the exclusion zone to simulate something like a negative SDL_Viewport?
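Not SDL-specific, but here is a generic sketch of what the "frame of four rects" approach boils down to: splitting the screen rectangle into the four bands that surround the exclusion zone. Rect and frameAround are illustrative names, not SDL API:

```typescript
// Illustrative only: compute the four rects that cover the screen minus a
// rectangular hole, assuming the hole lies inside the screen rect.
interface Rect { x: number; y: number; w: number; h: number; }

function frameAround(screen: Rect, hole: Rect): Rect[] {
  return [
    // top band
    { x: screen.x, y: screen.y, w: screen.w, h: hole.y - screen.y },
    // bottom band
    { x: screen.x, y: hole.y + hole.h, w: screen.w, h: screen.y + screen.h - (hole.y + hole.h) },
    // left band
    { x: screen.x, y: hole.y, w: hole.x - screen.x, h: hole.h },
    // right band
    { x: hole.x + hole.w, y: hole.y, w: screen.x + screen.w - (hole.x + hole.w), h: hole.h },
  ].filter(r => r.w > 0 && r.h > 0); // drop empty bands when the hole touches an edge
}
```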
I have some 64x64 sprites which work fine (no flicker/shuffling) when I move my camera normally. But as soon as I change the camera.zoom level (which was supposed to be a major mechanic in my game) away from 1f, the sprites flicker every time you move.
For example changing to 0.8f:
Left flicker:
One keypress later: (Right flicker)
So when you move around, the flickering of the map (however slight) is really distracting for gameplay.
The zoom is a flat 0.8f and I'm currently using camera.translate to move around. I've tried casting the translation to (int) and it still flickered... My Texture/sprite is using Nearest filtering.
I understand zooming may change/pixelate my tiles but why do they flicker?
Edit
For reference here is my tilesheet:
It's because of the nearest filtering. Depending on the amount of zoom, certain lines of artwork pixels will straddle lines of screen pixels, so they get drawn one pixel wider than other lines. As the camera moves, the rounding works out differently on each frame of animation, so different lines are drawn wider on each frame.
If you aren't going for a retro low-res aesthetic, you could use linear filtering with mipmaps (MipMapLinearLinear or MipMapLinearNearest), and start with higher-resolution art. The first option looks better if you are smoothly transitioning between zoom levels, with a possible performance impact.
Otherwise, you could round the camera's position to an exact multiple of the size of a pixel in world units. Then the same enlarged columns will always correspond with the same screen pixels, which would cut down on perceived flickering considerably. You said you were casting the camera translation to an int, but this requires the art to be scaled such that one pixel of art exactly corresponds with one pixel of screen.
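A sketch of that snapping, assuming an orthographic camera whose position you set each frame; worldUnitsPerScreenPixel depends on your viewport size and zoom, and the names here are illustrative:

```typescript
// Snap a camera coordinate to an exact multiple of the size of one screen
// pixel in world units. The names are illustrative, not a specific engine API.
function snapToPixelGrid(value: number, worldUnitsPerScreenPixel: number): number {
  return Math.round(value / worldUnitsPerScreenPixel) * worldUnitsPerScreenPixel;
}

// Each frame, after moving the camera:
// camera.position.x = snapToPixelGrid(camera.position.x, unitsPerPixel);
// camera.position.y = snapToPixelGrid(camera.position.y, unitsPerPixel);
```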
This doesn't fix the other problem: certain lines of pixels are drawn wider, so they appear to have greater visual weight than similar nearby lines, as can be seen in both your screenshots. One way to get around that would be to do the zoom as a secondary step, so you can control the appearance with an upscaling shader. Draw your scene to a frame buffer that is sized so one pixel of texture corresponds to one world pixel (with no zoom), and also lock your camera to integer locations. Then draw the frame buffer's contents to the screen, and do your zooming at this stage. Use a specialized upscaling shader for drawing the frame buffer texture to the screen to minimize blurriness and avoid nearest-filtering artifacts. There are various shaders for this purpose that you can find by searching online; many have been developed for use with emulators.
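If it helps, here is a rough sketch of the render-target plumbing that approach needs, written against raw WebGL purely for illustration since the question doesn't name a framework; the scene drawing and the upscaling shader themselves are assumed to exist elsewhere:

```typescript
// Rough sketch: create an offscreen colour target sized 1:1 with the art,
// draw the scene into it, then upscale it to the screen in a second pass.
function createPixelPerfectTarget(gl: WebGLRenderingContext, w: number, h: number) {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  const framebuffer = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  return { framebuffer, texture };
}

// Per frame: bind `framebuffer`, set the viewport to w x h and draw the scene
// at 1:1 with an integer-locked camera; then bind the default framebuffer and
// draw a fullscreen quad sampling `texture` with the upscaling shader,
// applying the zoom only in this second pass.
```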
I just implemented a light system in my engine. In the following screenshot you can see a light (the yellow square) in action:
Take into account that, on top of the light illuminating the scene, there is also an FOV implemented which occludes anything outside your field of view. That's why the left part of the shadow seems so off.
As you can see, the light's shadows are pretty "hard": they won't illuminate even a bit of the area outside the light's direct reach.
In order to make the lights look better, I applied a filter to them, which essentially limits the range to be illuminated and also slightly illuminates the area within this limit:
In the big yellow circle you can see how the area is illuminated even if no direct light reaches it.
This solution, however, comes with some undesirable side effects. As you can see in the following screenshot, even if no light at all reaches an area, it will be illuminated if it's too close to the light source:
I was wondering if there is any way to achieve what I'm trying to do by using shaders properly.
The main problem that I encounter comes from how I draw these shadows.
1) First I take the structures within the light's range.
At this point I'm working with vertices, as they define the area of the shadow-casting items:
2) Then, for each of these objects, I calculate the shadow they cast individually:
The shadow they cast is computed on the CPU by calculating projections for each vertex of the body (see the sketch after step 3).
3) Then the GPU draws these shapes into a texture to compose the final shadow:
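For reference, a minimal sketch of the per-vertex projection mentioned in step 2): each vertex of an occluder edge is pushed away from the light by a large fixed distance, and the four points form one shadow quad. The names and the extrusion distance are made up for the example, and the light is assumed not to sit exactly on a vertex:

```typescript
type Vec2 = { x: number; y: number };

// Push a vertex directly away from the light by a fixed distance.
function extrude(p: Vec2, light: Vec2, distance: number): Vec2 {
  const dx = p.x - light.x;
  const dy = p.y - light.y;
  const len = Math.hypot(dx, dy);
  return { x: p.x + (dx / len) * distance, y: p.y + (dy / len) * distance };
}

// One shadow quad per occluder edge: a, b, and their extruded counterparts.
function shadowQuad(a: Vec2, b: Vec2, light: Vec2, distance = 10000): Vec2[] {
  return [a, b, extrude(b, light, distance), extrude(a, light, distance)];
}
```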
The problem I find is that to make this diffuse shadow effect, I need the final shadow. If I were to calculate the diffuse shadows in step 2), a gap of light would appear between solids B and C.
But if I diffuse the shadows in step 3), I no longer have the vertex information, as all the information I have is the 3 local textures combined into one final texture.
So, is there any way to achieve this? My first idea would be to pass a varying to the fragment shader to calculate how much light comes into the dark area, but since I'll be processing this information on the final shadow, which has no vertex information, I'm completely lost about what approach I should use.
I might be completely wrong on this approach, since I have very very limited experience with shaders.
Here is an example of what I have right now, and what I desire:
What I have: plain illumination within the light radius (which causes the light to clip through walls).
What I want: the shadow gets more intense the farther it is from where the direct light ends.
I think that no matter what you do, you're going to have to calculate the light falloff differently for the areas of the scene that don't have direct visibility to the original light source. I'm not sure how you're actually applying the shadow volume to the scene, but you might be able to do something like classify the edges as you're generating the shadow volume (i.e. did they come from a wall or from the line from the light to a corner?), and treat them accordingly.
In short I don't think there's any clever shader-specific trick you can pull off to fix this problem. It's a limitation of the algorithm you're using, so you need to improve the algorithm to take into account the different nature of the edges in your shadow volume.
--- Edit ---
Ok I think your best bet is going to be to create two shadow volumes (or rather shadow areas since we're working in 2D). One will be exactly as you have now, the other will be smaller -- you'll want to exclude the areas that are in "soft" shadow. Then you'll have three categories in your fragment shader: total shadow, soft shadow, and unshadowed.
To create the second shadow map, I think you'll want to do something like add a fixed angle to the edges that are created by the light shining around a corner.
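One possible way to express that "fixed angle", sketched under the assumption that each penumbra edge starts at a silhouette corner and points away from the light; the names and the example angle are illustrative, and the sign of the rotation depends on which side of the shadow the edge lies:

```typescript
type Vec2 = { x: number; y: number };

// Rotate a 2D direction vector by a fixed angle (radians).
function rotate(dir: Vec2, radians: number): Vec2 {
  const c = Math.cos(radians);
  const s = Math.sin(radians);
  return { x: dir.x * c - dir.y * s, y: dir.x * s + dir.y * c };
}

// const hardDir = direction from the light through the corner (normalized);
// const softDir = rotate(hardDir, 0.05); // ~3 degrees of penumbra
// Build the smaller, fully dark shadow area from softDir instead of hardDir.
```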
--- Edit #2 ---
I think you can solve the problem of the gap between objects B and C by taking the silhouette of the shadow areas and the outer box of the scene. So for each of your shadow areas, you'd find the two outermost points that intersect the outer box, then throw out all the line segments in between and replace them with that portion of the box. I feel like I should be able to name that algorithm but it escapes me at the moment...
Original Scene:
Individual "hard" shadows:
Now union the shadows together:
Finally, trace around the shadows, keeping to the edges of the box. The corners you want to identify are the ones in yellow. As you are tracing around the perimeter of the shadows, you want to cut across the edge of the box until you reach the opposite corner. Not the easiest thing to code but if you've gotten this far I think you can figure it out :)
Alternatively, it might be easier to simply consider the lit areas. For example, you could take the difference of the scene box and the unioned shadow area. This will usually give you multiple polygons; discard all of them except the one that contains the light, because the rest are the false gaps.
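A small sketch of the "keep the polygon that contains the light" step, using a standard ray-casting point-in-polygon test; the names are illustrative:

```typescript
type Vec2 = { x: number; y: number };

// Ray-casting point-in-polygon test (vertices given in order).
function containsPoint(poly: Vec2[], p: Vec2): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i];
    const b = poly[j];
    const crosses =
      (a.y > p.y) !== (b.y > p.y) &&
      p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}

// const litArea = litPolygons.find(poly => containsPoint(poly, lightPos));
```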
I have researched this, and the methods used to make a bloom effect are usually based on combining a sharp image with a blurred one to give the glow effect. But I want to know how I can make gl_lines (or any line) have brightness. Since in my game I am randomly generating a simple 2D terrain, I wish to make the terrain line segments glow.
Use a fragment shader to calculate the distance from a fragment to the edge and color the fragment with the appropriate color value. You can use a simple control curve to control the radius and intensity of the glow (like in Photoshop). It can also be tuned to act like a wireframe visualization. The idea is that you don't really rasterize points to lines using a draw call; you just shade each pixel based on its distance from the corresponding edge.
The difference from using a blur pass is that, first, you get better performance, and second, you get per-pixel control over the glow: you can have a non-uniform glow, which you cannot get with blur, because blur is not aware of the actual line geometry and just blindly works on pixels, whereas with edge-distance detection you use the actual geometry data as input without flattening it down to pixels. You can also have things like gradient glows, e.g. a glow color that changes with the radius.
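A minimal sketch of that idea, assuming each line is drawn as a quad around the segment and the segment endpoints are passed as uniforms; the uniform names and the falloff curve are made up for the example:

```typescript
// Hypothetical distance-based glow shader. The vertex shader is assumed to
// pass the fragment's position (in the same space as the endpoints) as v_pos.
const glowFragmentShader = `
  precision mediump float;

  uniform vec2  u_a;         // segment start
  uniform vec2  u_b;         // segment end
  uniform float u_radius;    // glow radius
  uniform vec3  u_glowColor;

  varying vec2 v_pos;

  // Distance from point p to segment ab.
  float distToSegment(vec2 p, vec2 a, vec2 b) {
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
  }

  void main() {
    float d = distToSegment(v_pos, u_a, u_b);
    // Simple falloff curve; swap in any control curve you like.
    float intensity = 1.0 - smoothstep(0.0, u_radius, d);
    gl_FragColor = vec4(u_glowColor, intensity);
  }
`;
```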
Background:
I am creating a game that presents the world in an isometric perspective, achieved by drawing isometric tiles. My current implementation is naive, using the painter's method, drawing from back to front, from bottom to top, using surface blits from tile images.
The Problem:
I'm concerned (maybe unduly so, please let me know if this is the case) about overdraw. Here's a small snapshot of a single layer of tiles:
The areas highlighted in pink are the areas where the back-to-front, bottom-to-top method blits pixels to the canvas more than once. This is a small and contrived example, but in practice I hope to accomplish something more along the lines of this:
(image credit eBoy)
With an image as complex as this, and a tile-based implementation, each screen pixel is drawn to several times before the final image is composited, which feels really inefficient. Since these are just 2D images with, in the end, one-bit alpha masks, there aren't as many concerns as there would be with 3D (e.g. no wasted lighting or transform math), but it still seems there should be a more elegant way of determining whether a pixel should be drawn at all, based on whether or not it would be occluded in the final composition.
Solutions?
The best solution I've come up with so far (sketched below) is to:
Reverse the drawing order and draw front-to-back, top-to-bottom.
Keep a single bit per pixel fake z buffer that records whether or not a pixel has been drawn yet.
Only draw a tile if some of the pixels it covers haven't been drawn yet.
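A rough sketch of that idea, assuming the tiles are already sorted front-to-back and, for simplicity, treating each tile as covering its whole bounding rectangle (with one-bit alpha masks you would test and mark only the opaque pixels); all names here are illustrative:

```typescript
type Tile = { x: number; y: number; w: number; h: number };

function drawFrontToBack(
  tiles: Tile[],                     // pre-sorted front-to-back, top-to-bottom
  screenW: number,
  screenH: number,
  blitTile: (index: number) => void  // your existing blit call
): void {
  const covered = new Uint8Array(screenW * screenH); // fake 1-bit z buffer

  tiles.forEach((tile, i) => {
    const x0 = Math.max(0, tile.x), x1 = Math.min(screenW, tile.x + tile.w);
    const y0 = Math.max(0, tile.y), y1 = Math.min(screenH, tile.y + tile.h);

    // Only draw if at least one pixel the tile covers hasn't been drawn yet.
    let visible = false;
    for (let y = y0; y < y1 && !visible; y++) {
      for (let x = x0; x < x1; x++) {
        if (!covered[y * screenW + x]) { visible = true; break; }
      }
    }
    if (!visible) return; // fully occluded: skip the blit entirely

    blitTile(i);
    for (let y = y0; y < y1; y++) {
      covered.fill(1, y * screenW + x0, y * screenW + x1);
    }
  });
}
```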
Is there a better way to do this? Are blit operations superefficient and I'm tilting at windmills here?
Windmills. Especially if you're using OpenGL-accelerated SDL2 blits.