So https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glBlendFunc.xhtml lists one value that can only be used as the source factor: GL_SRC_ALPHA_SATURATE. It is intended to be used as glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE) together with polygon antialiasing (GL_POLYGON_SMOOTH) on front-to-back sorted geometry, and if I'm not mistaken it ensures that edges shared between adjacent triangles don't show up as visible seams.
The documentation also states that
Destination alpha bitplanes, which must be present for this blend function to operate correctly, store the accumulated coverage.
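For context, the setup I mean looks roughly like this (just a sketch in C++; I'm assuming an existing GL context, and the drawing helper is a placeholder):

// Classic polygon-smoothing setup with saturate blending.
// Assumes a current OpenGL context; the clear colour's alpha should be 0.
glEnable(GL_POLYGON_SMOOTH);                 // edge coverage is written to source alpha
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);  // the combination in question
glDisable(GL_DEPTH_TEST);                    // the classic recipe relies on the sorting instead
glClear(GL_COLOR_BUFFER_BIT);
drawTrianglesFrontToBack();                  // placeholder: geometry sorted front to back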
So, while I know more or less what this functionality is used for, I'd really like to understand how exactly it works and how the alpha bitplanes are used (does this mean the blend factor depends on both source alpha and destination alpha?). Since this isn't really documented, I hope someone here (looking at #Nicol Bolas) can shed some light on the math or implementation magic behind it.
And then one step further: is there any other use case GL_SRC_ALPHA_SATURATE can be used for?
Related
I needed to make an object in my game transparent, but it wasn't working properly. After some research I found out how to do alpha blending properly in Direct3D 9 and wrote some code that finally makes the object transparent. However, while I have a basic idea of it, I'm still a bit confused about how it all works. I have done lots of research, but I've only found very vague answers that just left me more confused. So, what do these two lines of code really mean and do?
d3ddev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
d3ddev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
Alpha blending in D3D is done according to this equation:
Color = TexelColor x SourceBlend + CurrentPixelColor x DestBlend
The setting you quote sets the SourceBlend factor to the alpha value of the source texture. The DestBlend factor is set to one minus the alpha value of the source texture. This is the classic post-blending (non-premultiplied alpha) setting. This kind of mode is used to "anti-alias" an image, for example to blur the edges of a circle so that they do not look pixelated. It is quite good for things like smoke effects or for semi-transparent objects like plastics.
Why do you have to specify it? Well, if you set DestBlend to always be one, you get a kind of "ghosting" effect, like when you partially see a reflection in a pane of glass. This effect is particularly good when used with an environment map. Pre-blending (premultiplied alpha) can also be useful, but it requires that you change the format of your inputs.
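For comparison, here is a minimal sketch of both setups side by side (the device pointer and the D3DRS_ALPHABLENDENABLE call are assumed boilerplate, not part of the question):

// Assumes d3ddev is a valid IDirect3DDevice9*.
d3ddev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

// Standard "straight" alpha blending (the two lines from the question):
//   Color = TexelColor * SrcAlpha + CurrentPixelColor * (1 - SrcAlpha)
d3ddev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
d3ddev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

// The "ghosting" variant mentioned above: the destination keeps full strength.
//   Color = TexelColor * SrcAlpha + CurrentPixelColor * 1
d3ddev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
d3ddev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);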
I want to locate a service robot via infrared landmarks. The idea is to detect two landmarks, get the distance to them and calculate the robot's position from this information (the positions of the landmarks are known).
For this I have built an artificial 2x3 matrix of IR LEDs, which is visible in the robot's infrared camera image (shown in the image below).
As a first step, I want to detect a single landmark in a picture and get its x-y coordinates. I can use these coordinates later to get the distance from the provided depth image.
My first approach was to convert the image to a black and white image. Then I tried to filter out different clusters of points (which I had dilated and contoured first). I couldn't succeed with this method.
Now I wonder if there are any pattern recognition / computer vision methods that can help me detect the pattern reasonably easily.
I've added a picture of the infrared image with the landmark in it and a converted black/white image.
a) Which method can help me to solve this problem?
b) Should I use a 3x3 matrix or some other geometric arrangement instead of the 2x3 matrix?
IR-Image
Black-White Image
A direct answer:
1) find all small circles in the image; 2) look among these small circles for ones that are the same size and close together, and, say, form parallel lines.
The reason for this approach is that you have coded the landmark with a specific pattern of small objects. Therefore, look for the objects and then look for the pattern. (If the orientation and size never changed, you could just look for a sub-image within the larger image; but because they can change, you need to look for elements of the pattern that remain consistent with motion in 3D space, that is, the parallel lines.)
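A minimal sketch of that two-step idea in OpenCV/C++ (the threshold, the blob-size limits and the use of SimpleBlobDetector are my own choices, not something the answer prescribes):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Load the IR frame as greyscale; the path is a placeholder.
    cv::Mat ir = cv::imread("ir_frame.png", cv::IMREAD_GRAYSCALE);

    // Step 1: find all small, bright, roughly circular blobs (the LEDs).
    cv::Mat bw;
    cv::threshold(ir, bw, 200, 255, cv::THRESH_BINARY);      // 200 is an assumed threshold

    cv::SimpleBlobDetector::Params p;
    p.filterByColor = true;       p.blobColor = 255;          // bright blobs
    p.filterByArea = true;        p.minArea = 4; p.maxArea = 400;
    p.filterByCircularity = true; p.minCircularity = 0.6f;
    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(p);

    std::vector<cv::KeyPoint> blobs;
    detector->detect(bw, blobs);

    // Step 2 (crude): report the candidates; a real implementation would now check
    // that six similar-sized blobs form two roughly parallel rows of three.
    for (const cv::KeyPoint& kp : blobs)
        std::printf("LED candidate at (%.1f, %.1f), size %.1f\n", kp.pt.x, kp.pt.y, kp.size);
    return 0;
}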
This will work in the example images, but to know whether it will work more generally, we need to know more than you have told us: it depends on whether the variation in the images of the matrix and the variation in the background still allow the two to be distinguished. If not, maybe you need a more clever algorithm or maybe a different pattern of lights. In the extreme case, it's obvious that if another 2x3 matrix were nearby, this would not be enough. It all depends on the variation of the object to be identified and the variation within the background scene, and because you don't tell us either of these things, it's hard to say what the best way is, what's good enough, what a better way would be, etc.
If you have the choice, and here it sounds like you do, good data is better than clever analysis. For this problem, I'd call good data anything that clearly distinguishes the object from the background. You need to think of it this way: look at what the background is and at all the different perspectives on the lights that are possible, and make sure these can never be confused.
For example, if you have a lot of control over this, and enough time, temporal variation is often the easiest. Turning the lights (or a subset of the lights) on and off and then looking for the expected temporal variation is often the surest way to distinguish signal from noise, but really, this again is just making an assumption about the background and foreground (i.e., that the background won't vary with that particular time pattern).
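If you go the temporal route, the check can be as simple as differencing a frame taken with the landmark LEDs on against one with them off (a sketch; capturing the two frames and synchronising with the LEDs is assumed to happen elsewhere):

#include <opencv2/opencv.hpp>

// Given two greyscale frames taken with the landmark LEDs on and off,
// the difference image should contain little besides the LEDs themselves.
cv::Mat ledMask(const cv::Mat& frameOn, const cv::Mat& frameOff) {
    cv::Mat diff, mask;
    cv::absdiff(frameOn, frameOff, diff);                    // pixels that changed between frames
    cv::threshold(diff, mask, 50, 255, cv::THRESH_BINARY);   // 50 is an assumed threshold
    return mask;                                             // bright spots = blinking LEDs
}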
I have been using OpenGL for a while now and remain positive about making progress. However, I have now run into an issue that I have been unable to solve for a while. The issue is that I would like to:
Create points on screen sequentially (to appear every second for example)
Move these points independently
So far I have two methods on paper. The first is to upload all vertices to a VBO up front and then make each point visible (draw it) when its turn comes. The other method I had in mind was to create an empty VBO (initialized with NULL) and upload the data one point at a time.
Note that I want to transform these points independently of each other. Can a uniform still be used, and if so, how can I set this up to draw point - transform - draw point - transform?
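To be concrete, here is roughly what I have in mind for the second method (the buffer, the uniform name and the point data are just placeholders, and the shader setup is omitted):

// Reserve space for maxPoints 2D points up front, but upload nothing yet.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, maxPoints * 2 * sizeof(float), nullptr, GL_DYNAMIC_DRAW);

// Once per second: append the next point.
glBufferSubData(GL_ARRAY_BUFFER, visibleCount * 2 * sizeof(float), 2 * sizeof(float), newPoint);
++visibleCount;

// Each frame: draw point - transform - draw point - transform ...
for (int i = 0; i < visibleCount; ++i) {
    glUniform2f(offsetLoc, offsets[i].x, offsets[i].y);  // per-point transform via a uniform
    glDrawArrays(GL_POINTS, i, 1);                       // draw just the i-th point
}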
If I'm going about this completely wrong, or there is a better method, then please say so.
Many thanks!
I was unable to find literature on this.
The question is: given a photograph with a well-known object in it, say something that was printed for this purpose, how well does it work to use that object to infer lighting conditions as a method of color profile calibration?
For instance, say we print out the rainbow peace flag and then take a photo of it in various lighting conditions with a consumer-grade flagship smartphone camera (say, an iPhone 6 or a Nexus 6). The underlying question is whether using known references within the image is a potentially good technique for calibrating the colors throughout the image.
There are of course a number of issues regarding variation of lighting conditions across different regions of the photograph, along with which wavelengths the device is capable of differentiating even in the best circumstances, but let's set those aside.
Has anyone worked with this technique or seen literature regarding it, and if so, can you point me in the direction of some findings?
Thanks.
I am not sure if this is a standard technique; however, one simple way to calibrate your color channels would be to learn a regression model (for each pixel) between the colors that are present in the region and their actual colors. If you have some shots of known images, you should have sufficient data to learn the transformation model using a neural network (or a simpler model like linear regression if you like, though a NN would be able to capture multi-modal mappings). You could even do a patch-based regression using a NN on small patches (say 8x8 or 16x16) if you need to learn some spatial dependencies between intensities.
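I'm not aware of a canonical implementation, but as a sketch of the linear-regression variant (using Eigen; the 3x4 affine model and the sample layout are my own choices):

#include <Eigen/Dense>

// Fit an affine colour transform T (3x4) that maps observed RGB to reference RGB
// in a least-squares sense, given N matched colour samples (one sample per row).
Eigen::Matrix<double, 3, 4> fitColourTransform(const Eigen::MatrixXd& observed,   // N x 3
                                               const Eigen::MatrixXd& reference)  // N x 3
{
    Eigen::MatrixXd A(observed.rows(), 4);
    A.leftCols<3>() = observed;
    A.col(3).setOnes();                                          // affine offset term

    // One least-squares solve covering all three output channels at once.
    Eigen::MatrixXd X = A.colPivHouseholderQr().solve(reference); // 4 x 3
    return X.transpose();                                         // 3 x 4
}

Applying it to a pixel is then just T.leftCols<3>() * rgb + T.col(3); a NN (or the patch-based variant) would replace the solve with a training step but keep the same observed-to-reference pairing.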
This should be possible, but you should pay attention to the way your known object reacts to light. Ideally it should be non-glossy, have identical colours when pictured from an angle, be totally non-transparent, and reflect all wavelengths outside the visible spectrum to which your sensor is sensitive (IR, UV; no filter is perfect) uniformly across all differently coloured regions. That last point deserves emphasis because it is very important and very hard to get right.
However, the main issue you have with a coloured known object is: what are the actual colours of the different regions in RGB(*)? So in this way you can determine the effect of different lighting conditions relative to each other, but never relative to some ground truth.
The solution: use a uniformly white, non-reflective, opaque surface; a sufficiently thick sheet of white paper should do just fine. Take a non-overexposed photograph of the sheet in your scene, and you know:
R, G and B should be close to equal
R, G and B should be nearly 255.
From those two facts and the R, G and B values you actually get from the sheet, you can determine any shift in colour and brightness in your scene. Assume that black is still black (usually a reasonable assumption) and use linear interpolation to determine the shift experienced by pixels coloured somewhere between 0 and 255 on any of the axes.
(*) or other colourspace of your choice.
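To make the interpolation concrete, a minimal C++ sketch of the per-channel correction (assuming 8-bit RGB and that black maps to black, as described above):

#include <algorithm>
#include <cstdint>

struct RGB { std::uint8_t r, g, b; };

// 'sheet' is the average colour measured on the white sheet in this scene.
// Scale each channel so the sheet comes out as (255, 255, 255), leaving black at black,
// i.e. linear interpolation between those two anchors.
RGB correct(RGB px, RGB sheet) {
    auto scale = [](std::uint8_t v, std::uint8_t white) {
        double s = 255.0 * v / std::max<int>(white, 1);      // guard against a zero reading
        return static_cast<std::uint8_t>(std::min(s, 255.0));
    };
    return { scale(px.r, sheet.r), scale(px.g, sheet.g), scale(px.b, sheet.b) };
}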
I have a function for generating a 3D matrix of grey values (char values from 0 to 255). Now I want to generate a 3D object out of this matrix, i.e. I want to display these values as a 3D object (in C++). What is the best way to do that platform-independently and as fast as possible?
I have already read a bit about using OGL, but then I run into the following problem: the matrix can contain up to $4\cdot10^9$ values. If I try to load the complete matrix into RAM, the program runs out of memory. So drawing directly from the matrix is impossible. Furthermore, I have only found functions for drawing 2D images in OGL. Is there a way to draw 3D pixels in OGL? Or should I rather use another approach?
I do not need a moving functionality (at least not at the moment), I just want to display the data.
Edit 2: To narrow the question down: is there a way to draw pixels in 3D space with OGL, taken from a 3D matrix? I did not find a suitable function; I only found 2D functions.
What you're looking to do is called volume rendering. There are various techniques to achieve it, and ultimately it depends on what you want it to look like.
There is no simple way to do this either. You can't just draw 3D pixels. You can draw using GL_POINTS and have each transformed point rasterize to one pixel, but this is probably completely unsatisfactory for you because it will only draw some scattered pixels to the screen (you won't see much at high resolutions).
A general solution would be to render a cube out of normal triangles for each point, sorted back to front if you need alpha blending. If you want a more specific answer you will need to narrow your request. Ray tracing also has merits in volume rendering. Read more about volume rendering.
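As a concrete starting point for the simplest variant above, a C++ sketch that subsamples the volume, keeps only voxels above a threshold, and prepares them for a GL_POINTS draw (the step size and threshold are arbitrary choices, and a GL context plus a basic point shader are assumed to exist already):

#include <cstdint>
#include <vector>

struct Voxel { float x, y, z, intensity; };

// volume holds dimX*dimY*dimZ grey values (0..255); far too many to draw in full,
// so take every 'step'-th voxel and skip the near-black ones.
std::vector<Voxel> extractPoints(const std::uint8_t* volume,
                                 int dimX, int dimY, int dimZ,
                                 int step = 4, std::uint8_t threshold = 32) {
    std::vector<Voxel> pts;
    for (int z = 0; z < dimZ; z += step)
        for (int y = 0; y < dimY; y += step)
            for (int x = 0; x < dimX; x += step) {
                std::uint8_t v = volume[(std::size_t)z * dimY * dimX + (std::size_t)y * dimX + x];
                if (v >= threshold)
                    pts.push_back({ (float)x, (float)y, (float)z, v / 255.0f });
            }
    return pts;
}

// Per frame (GL context and shader assumed bound), the result can then be drawn with e.g.:
//   glBufferData(GL_ARRAY_BUFFER, pts.size() * sizeof(Voxel), pts.data(), GL_STATIC_DRAW);
//   glDrawArrays(GL_POINTS, 0, (GLsizei)pts.size());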