I have a project that requires drawing a hole (a transparent cube) in a wall, but the hole could be any shape and any size, and may extend beyond the wall. I tried using blending to override the existing wall; unsurprisingly, that does not work correctly.
In the project, the walls are drawn with the instancing call DrawArraysInstanced, and the transparent cube is drawn with instancing too (this isn't essential; it may become DrawArrays in the future). I tried StencilFunc and DepthMask, but could not figure out how to make them work.
Briefly: is there an approach that draws rectangles by instancing and then cuts smaller shapes out of some of them (i.e., some instanced rectangles have a window)? Note that the cut-out shape could be any size and any shape.
Any advice and suggestions would be greatly appreciated, and if the question is not clear, please let me know.
Edit:
I know the practical way is to compute the shape intersection to get the triangles and then use DrawArraysInstanced to draw the triangles for the objects with holes cut out, but I just want to know if there is a trick that makes it work without that.
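For what it's worth, the stencil route you mention can work as a two-pass trick. Below is a minimal, hedged sketch in desktop OpenGL; drawHoleShape() and drawWalls() are hypothetical stand-ins for your own draw calls, and it assumes your framebuffer has a stencil attachment:

```cpp
// Hedged sketch: cut a hole in the walls with the stencil buffer.
glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);

// Pass 1: mark the hole region in the stencil buffer only.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
glDepthMask(GL_FALSE);                               // no depth writes
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawHoleShape(); // the cutter geometry (cube or any shape), covering the hole

// Pass 2: draw the walls only where the stencil was NOT marked.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawWalls(); // your DrawArraysInstanced call, unchanged

glDisable(GL_STENCIL_TEST);
```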
I'm trying to add a skybox to the world/camera/game and I don't know how to go about it. If someone could give me some guidance on how to apply it, it would be much appreciated.
I have already loaded the skybox; I just don't know how to draw it properly so that it fits around the camera as it moves.
I have managed to texture a sort of cube, which might be close to a skybox, but it's only visible from the outside: once you enter the cube, you can't see it from the inside. Perhaps if I could invert the cube's faces, it would show when I'm inside the cube, and I could make it larger?
(Screenshot: the cube seen from outside)
(Screenshot: looking out from inside the cube)
I had a similar problem a few weeks back; if you are looking for some pseudocode, I think I may be able to help. First of all, a cube isn't the best idea here, as the rendered box won't look natural; map the texture to a sphere instead for a smooth effect.
Create a bounding sphere around your viewer that moves relative to your camera
Apply the texture to that sphere; this will give the impression that the sky moves relative to you
When you are drawing, disable your z-buffer writes and frustum culling (assuming you're using a culling algorithm); this lets the skybox be drawn first while ensuring terrain is drawn over the top of it when OpenGL performs its depth tests (see the sketch after this list)
Note: Don't forget to re-enable the z-buffer after the skybox has been drawn; otherwise your terrain elements will appear outside the sphere, meaning you will only see the skybox.
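As a rough illustration of that draw order, here is a hedged OpenGL sketch; renderFrame(), drawSkySphere(), and drawTerrain() are hypothetical names, not from the repo mentioned below:

```cpp
#include <GL/gl.h>

// Hypothetical helpers: draw a textured sphere centered on the camera,
// and draw the rest of the scene.
void drawSkySphere(float camX, float camY, float camZ);
void drawTerrain();

void renderFrame(float camX, float camY, float camZ) {
    glDepthMask(GL_FALSE);           // sky must not write depth
    glDisable(GL_CULL_FACE);         // the sphere is viewed from the inside
    drawSkySphere(camX, camY, camZ); // keep the sphere centered on the camera
    glDepthMask(GL_TRUE);            // restore depth writes before the terrain
    glEnable(GL_CULL_FACE);
    drawTerrain();                   // terrain now draws over the sky
}
```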
I recently wrote a basic terrain engine in DirectX, but the principles are fairly similar; if you'd like to view the repo you can find it here.
Check out line 286 in this file to see how the skybox is rendered; then also visit the SkyBox implementation file to see how it is constructed, and the SkyShader implementation file to see how the texture is mapped to the sphere. The main method to be concerned with in the shader file is SetShaderParameters().
In terms of moving the skybox relative to your camera, simply set the WVP matrix of your skybox to that of your camera, and then tweak the x, y, z position of the skybox to your liking.
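In OpenGL terms, a hedged GLM sketch of keeping the skybox centered on the camera might look like this (function and parameter names are assumptions):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical: build the skybox's world matrix from the camera position so
// the sphere never gets nearer or farther; skyScale is tweaked to taste.
glm::mat4 skyboxWorld(const glm::vec3& cameraPos, float skyScale) {
    return glm::translate(glm::mat4(1.0f), cameraPos)
         * glm::scale(glm::mat4(1.0f), glm::vec3(skyScale));
}
// WVP for the sky = projection * view * skyboxWorld(cameraPos, skyScale)
```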
Extra: if you are going to implement multiplayer aspects, just disable back-face rendering for the sphere; then each player can see their own SkyBox but opponents cannot. Alternatively, you can create one large sphere around the whole world.
Hope that helps - if you need any more help, just ask; I know this stuff can be fairly dense at first :)
Background:
I am creating a game that presents the world in an isometric perspective, achieved by drawing isometric tiles. My current implementation is naive: it uses the painter's algorithm, drawing from back to front and bottom to top, using surface blits of tile images.
The Problem:
I'm concerned (maybe unduly so; please let me know if this is the case) about overdraw. Here's a small snapshot of a single layer of tiles:
The areas highlighted in pink are the areas where the back-to-front, bottom-to-top method blits pixels to the canvas more than once. This is a small and contrived example, but in practice I hope to accomplish something more along the lines of this:
(image credit eBoy)
With an image as complex as this, and a tile-based implementation, each screen pixel is drawn several times before the final image is composited, which feels really inefficient. Since these are just 2D images with, in the end, one-bit alpha masks, there aren't as many concerns as there would be in 3D (e.g. no wasted lighting or transform math), but it still seems there should be a more elegant way of deciding whether a pixel should be drawn, based on whether or not it would be occluded in the final composition.
Solutions?
The best solution I've come up with so far (sketched in code after this list) is to:
Reverse the drawing order and draw front-to-back, top-to-bottom.
Keep a single bit per pixel fake z buffer that records whether or not a pixel has been drawn yet.
Only draw a tile if some of the pixels it covers haven't been drawn yet.
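A hedged sketch of that scheme follows; the Tile fields and the skipped blit call are stand-ins, it assumes tiles are already clipped to the screen, and a real version would also respect each tile's 1-bit alpha when marking the mask:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical tile: screen rectangle plus (elided) pixel data and alpha mask.
struct Tile { int x, y, w, h; };

// True if the tile touches at least one pixel not yet written this frame.
bool coversNewPixels(const Tile& t, const std::vector<uint8_t>& mask, int screenW) {
    for (int py = t.y; py < t.y + t.h; ++py)
        for (int px = t.x; px < t.x + t.w; ++px)
            if (!mask[py * screenW + px]) return true;
    return false;
}

void drawFrontToBack(const std::vector<Tile>& tilesFrontToBack, int screenW, int screenH) {
    std::vector<uint8_t> mask(size_t(screenW) * screenH, 0); // 0 = not yet drawn
    for (const Tile& t : tilesFrontToBack) {
        if (!coversNewPixels(t, mask, screenW)) continue; // fully occluded: skip the blit
        // blitTile(t) would go here, writing only pixels whose mask bit is 0.
        for (int py = t.y; py < t.y + t.h; ++py)          // mark the covered pixels
            for (int px = t.x; px < t.x + t.w; ++px)
                mask[py * screenW + px] = 1;
    }
}
```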
Is there a better way to do this? Are blit operations super-efficient, and am I tilting at windmills here?
Windmills. Especially if you're using OpenGL-accelerated SDL2 blits.
I'm making a 2D game that uses DirectX. Currently, I have a background texture (with more to come) that I draw to the screen. However, I only want a portion of the texture drawn to the screen. I know that I could use a rectangle with the draw function, but I need a greater degree of control. Is there a way to draw several triangles (using custom vertices) from my drawing to the screen? I've looked around the internet and this site, but I just can't seem to find what I want. I can give more information/code if needed. Thank you!
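For context, one common way to get this kind of control is to submit vertices that carry their own texture coordinates. A hedged Direct3D 9-style sketch (the device, the texture, and all coordinates are assumptions, not from this question):

```cpp
#include <d3d9.h>

struct TexturedVertex {
    float x, y, z, rhw; // pre-transformed screen-space position
    float u, v;         // texture coordinates into the source image
};
#define FVF_TEXTUREDVERTEX (D3DFVF_XYZRHW | D3DFVF_TEX1)

// Hypothetical: draw one triangle that samples only part of 'texture'.
void drawTrianglePortion(IDirect3DDevice9* device, IDirect3DTexture9* texture) {
    TexturedVertex tri[3] = {
        {100.0f, 100.0f, 0.0f, 1.0f, 0.0f, 0.0f}, // screen pos + UV (made up)
        {300.0f, 100.0f, 0.0f, 1.0f, 0.5f, 0.0f}, // half the texture width
        {100.0f, 300.0f, 0.0f, 1.0f, 0.0f, 0.5f}, // half the texture height
    };
    device->SetFVF(FVF_TEXTUREDVERTEX);
    device->SetTexture(0, texture);
    device->DrawPrimitiveUP(D3DPT_TRIANGLELIST, 1, tri, sizeof(TexturedVertex));
}
```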
I would like to create a light effect in a 2D car racing game written in SDL.NET (and C#).
The light effect is simple: the car headlights (a classic conic light effect).
Does somebody know where I can look for examples of light management via SDL? Or maybe tell me how to solve this issue?
Thank you for your support!
Update: I've actually created an image in GIMP that simulates the light.
I then draw it in front of my car sprite to simulate the light.
But I don't like this type of approach... although maybe it is more efficient than run-time generation/simulation of the light!
If you're looking at pure 2D solutions, you just want to attach a headlights sprite to your car sprite. There is no "light management" here, just an alpha-blended sprite.
To improve the effect, you might actually want to create and use two sprites:
one small and directional, for the conic headlight effect
one much bigger and halo-like, to increase lighting in front of the car over a large area.
Note: you might do the second without images, if you can create an alpha-blended primitive of the proper shape in SDL.
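As a rough illustration of the "attach a blended sprite to the car" idea (SDL2 in C++ rather than SDL.NET; the struct, the offsets, and the sizes are all made up):

```cpp
#include <SDL.h>

struct Car { float x, y, angleDeg; };

// Hypothetical: draw an additively blended headlight sprite that follows
// the car; offsets and sizes below are arbitrary placeholder values.
void drawHeadlights(SDL_Renderer* renderer, SDL_Texture* headlightTex, const Car& car) {
    SDL_SetTextureBlendMode(headlightTex, SDL_BLENDMODE_ADD);     // cheap "light" look
    SDL_FRect dst{ car.x + 20.0f, car.y - 16.0f, 64.0f, 32.0f };  // in front of the car
    SDL_RenderCopyExF(renderer, headlightTex, nullptr, &dst,
                      car.angleDeg, nullptr, SDL_FLIP_NONE);      // rotate with the car
}
```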
If you need a realistic lighting model, you have to switch to OpenGL or DirectX and use a shader-based technique like deferred lighting. This is an example for XNA.
How about using multiple images instead?
Since SDL doesn't have shader effects, I would suggest breaking the conical image into small parts (depending on the detail you want), collision-checking those parts against the objects in front of the image, and drawing only the parts required.
It's a hack, but it can look good if you divide the "glow" images both vertically and horizontally.
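A hedged sketch of that hack (SDL2 in C++; Cell, isBlocked(), and the cell list are all assumptions):

```cpp
#include <SDL.h>
#include <vector>

struct Cell { SDL_Rect src; SDL_Rect dst; }; // one sub-rectangle of the glow image

// Assumed occlusion test against the objects in front of the light.
bool isBlocked(const SDL_Rect& screenRect);

void drawGlowInParts(SDL_Renderer* r, SDL_Texture* glow, const std::vector<Cell>& cells) {
    for (const Cell& c : cells)
        if (!isBlocked(c.dst))                     // skip cells hidden by obstacles
            SDL_RenderCopy(r, glow, &c.src, &c.dst);
}
```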
Circles are one of the basic geometric entities. Yet there are no primitives defined in OpenGL for them, as there are for lines or polygons. Why is that? It's a little annoying to have to include custom headers for this all the time!
Is there a specific reason they were omitted?
While circles may be basic shapes, they aren't as basic as points, lines, or triangles when it comes to rasterisation. The first graphics cards with 3D acceleration were designed to do one thing very well: rasterise triangles (plus lines and points, because they were trivial to add). Supporting more complex shapes would have made the cards considerably more expensive while adding only a little functionality.
But there's another reason for not including circles/ellipses: they don't connect. You can't build a 3D model out of them, and you can't connect triangles to them without introducing gaps or overlapping parts. So for circles to be useful, you also need other shapes such as curves and more advanced surfaces (e.g. NURBS). Circles alone are only useful as "big points", which can also be done with a quad and a circle-shaped texture, or with triangles.
If you are using "custom headers" for circles, you should be aware that those probably create a triangle model that forms your "circles".
Because historically, video cards have rendered points, lines, and triangles.
You calculate curves using short enough lines so the video card doesn't have to.
Because graphics cards operate on 3-dimensional points, lines, and triangles. A circle requires curves or splines; it cannot be perfectly represented by a "normal" 3D primitive, only approximated as an N-gon (so it looks like a circle at a certain distance). If you want a circle, write the routine yourself (it isn't hard to do): either draw it as an N-gon, or make a square (2 triangles) and cut a circle out of it using a fragment shader (you can get a pixel-perfect circle this way).
You could always use gluSphere (if a three-dimensional shape is what you're looking for).
If you want to draw a two-dimensional circle you're stuck with custom methods. I'd go with a triangle fan.
The primitives are called primitives for a reason :)
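For completeness, a hedged sketch of the triangle-fan approach suggested above (legacy immediate-mode OpenGL; the function name and segment count are arbitrary):

```cpp
#include <GL/gl.h>
#include <cmath>

// Hypothetical: approximate a 2D circle with a triangle fan;
// 'segments' controls how round the result looks.
void drawCircle(float cx, float cy, float radius, int segments) {
    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(cx, cy);                    // center of the fan
    for (int i = 0; i <= segments; ++i) {  // '<=' closes the loop
        float theta = 2.0f * 3.14159265f * float(i) / float(segments);
        glVertex2f(cx + radius * std::cos(theta),
                   cy + radius * std::sin(theta));
    }
    glEnd();
}
```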