Bi-linear Interpolation for Radiosity - OpenGL

I have developed an OpenGL project that uses the old GL_QUADS rendering, without shaders.
I want to average the colours of my radiosity solution. I have a number of patches per face. What I did:
I average the colours of the adjacent patches within a face. I got good results, but I am still getting some Mach band effect.
I'll try to explain what I did:
// ___________________
// |v3     v2|v3     v2|
// |         |         |
// |    2    |    3    |
// |         |         |
// |v0_____v1|v0_____v1|
// |v3     v2|v3     v2|
// |         |         |
// |    0    |    1    |
// |         |         |
// |v0_____v1|v0_____v1|
Every patch has a colour: patch 0, patch 1, patch 2 and patch 3. Each patch's vertices start out with that patch's colour. Then I change the vertex colours by averaging the colours of adjacent patches. So first I take the colours of patch 0 and patch 1, add them together and divide by 2, then I assign this new colour to vertex 1 of patch 0 and vertex 0 of patch 1.
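A minimal sketch of that averaging step (Color, Patch and smoothEdge are illustrative names following the diagram above, not the actual code):
struct Color { float r, g, b; };

Color average(const Color& a, const Color& b)
{
    return { (a.r + b.r) * 0.5f, (a.g + b.g) * 0.5f, (a.b + b.b) * 0.5f };
}

// v0 = bottom-left, v1 = bottom-right, v2 = top-right, v3 = top-left,
// as in the diagram above.
struct Patch { Color colour; Color vertexColour[4]; };

// Smooth the shared edge between two horizontally adjacent patches,
// e.g. patch 0 (left) and patch 1 (right).
void smoothEdge(Patch& left, Patch& right)
{
    // The colour at the shared bottom vertices (v1 of left, v0 of right)
    // becomes the average of the two patch colours.
    Color shared = average(left.colour, right.colour);
    left.vertexColour[1]  = shared;
    right.vertexColour[0] = shared;
}
// An interior corner shared by four patches would instead get the average
// of all four patch colours.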
However, I saw a paper where they get different results.
On this image, the author sort of tried to explain how he got those values, but I didn't understand it. He is doing something very similar to what I did, yet he seems to get rid of the Mach band effects.
I get results like this:
These are the results I get with my radiosity rendering:
These are the results I get with my interpolation method:
It did get smoother, but I still have strong Mach band effects.

I do not understand what figure 8a is trying to accomplish, but your implementation sounds reasonable to me.
The artifacts you see are not a problem with your interpolation but are due to the fact that you have a low-contrast colour gradient on a flat surface.
If, for example, the RGB colour changes from (100,0,0) to (110,0,0) over 100 pixels, then every ten pixels there is a change of 1 in the red channel. As your scene is very simple, these edges extend over large parts of the image. The human brain is very good at detecting them, so voilà.
Probably the only ways around it are to use a more complex scene, to use textures, or to use a fragment shader with some small artificial noise.


Can't figure out the maths behind a quadratic curve needed for a slingshot

I would like to apologise if this is too maths-based.
I have a project where I have to create an Angry Birds game with our teacher's custom game engine, but I am stuck on the maths behind the slingshot. We are not allowed to use any standard libraries. The top left of the window is (0, 0) and the y-axis increases as you go down; the window is 1280 pixels wide and 720 pixels high. I am trying to make the bird travel further the further you pull it left from the sling origin at (257, 524). I use the y value from the moment of release as the starting point so that the bird doesn't jump somewhere else on the y-axis straight after letting go.
Currently the bird's y value only increases (it moves down), which is to be expected given that is exactly what my code does. I created a variable for how far from the slingshot origin the bird is once the mouse is released, and I would like to use this value in a speed calculation. I don't know what values to use in a quadratic formula to make the bird stay on the screen. I have tried to illustrate the window below to make it clearer.
float y = getY() + getX()/10 * getX()/10 * (game_time.delta.count() / 10000.f);
setY(y);
//window illustration
--------------------------------------------------------------
|(0, 0)                                                      |
|                                                            |
|                                                            |
|                  o          o                              |
|              o                                             |
|           o                      o                         |
|                                                            |
|bird-> o\  /  (257, 524)              o                     |
|        |  |                                                |
|________|__|_____________________________________(1280, 720)|
You have two problems:
lack of knowledge of elementary physics related to projectile motion (an "oblique shot")
the window origin being in the top-left corner, which implies a left-handed coordinate system
For the first part, I'd suggest you read an article about projectile physics, such as the kinematics of projectile motion.
In brief:
divide the bird's motion into horizontal and vertical parts:
the horizontal part is motion at constant speed
the vertical part is motion influenced by the constant force of gravity
calculate the horizontal and vertical components of velocity and position independently as functions of time
use the calculated position to draw the "bird"
The second problem is easily solved by placing your coordinate system in the lower-left corner of the window, with y pointing up. This way you have a right-handed coordinate system that can be used for all calculations, using the equations found at the aforementioned link.
When you need to actually draw the bird, use the following transformation for the y coordinate:
y_draw = window_height - y_calculated;
Don't forget to add appropriate offsets for x and y to compensate for the fact that the origin used for the calculations is different from the position of the slingshot.
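As a minimal sketch of those steps (Vec2, the constants and the gravity value are illustrative assumptions, not part of the engine):
struct Vec2 { float x, y; };

const float WINDOW_H = 720.0f;
const Vec2  ORIGIN   = { 257.0f, WINDOW_H - 524.0f };   // slingshot, y-up

// Launch velocity grows with how far the bird was pulled from the origin.
Vec2 launchVelocity(Vec2 pull, float speedPerPixel)
{
    return { (ORIGIN.x - pull.x) * speedPerPixel,
             (ORIGIN.y - pull.y) * speedPerPixel };
}

// Position t seconds after release: constant horizontal speed, constant
// downward gravity g (in pixels per second squared, tune to taste).
Vec2 positionAt(float t, Vec2 v0, float g)
{
    return { ORIGIN.x + v0.x * t,
             ORIGIN.y + v0.y * t - 0.5f * g * t * t };
}

// Flip back to the window's top-left, y-down coordinates for drawing.
Vec2 toScreen(Vec2 p) { return { p.x, WINDOW_H - p.y }; }
Each frame you advance t by the frame delta and draw the bird at toScreen(positionAt(t, v0, g)).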

OpenGL: Count Number of Triangles / Objects drawn

For an OpenGL game developed in Visual Studio, is there any way to count the triangles or objects (meshes with no connection to each other) that are drawn, with the help of the API? I did not find any information on that, and counting them manually seems needlessly painful.
While it isn't really a hassle to do it "yourself", you could override/redirect all glDraw*() calls.
I did something similar a few years ago, when I wanted to count the number of draw calls any given frame made (for debug builds). I implemented it with a macro for each glDraw*(), which looked like:
int drawCallCount = 0;
#define glDrawArrays(...) \
do { glad_glDrawArrays(__VA_ARGS__); ++drawCallCount; } while (0)
// etc. for the other glDraw*() calls
Then at the end of each frame I would read drawCallCount and reset it back to 0.
Note that I was/am using GLAD.
Consider glDrawArrays(GLenum mode, GLint first, GLsizei count): count is the number of vertices drawn, starting at index first. So if mode is GL_TRIANGLES, then count / 3 is the number of triangles that draw call made.
+-------------------+-----------+
| Mode              | Triangles |
+-------------------+-----------+
| GL_TRIANGLES      | count / 3 |
+-------------------+-----------+
| GL_TRIANGLE_STRIP | count - 2 |
+-------------------+-----------+
| GL_TRIANGLE_FAN   | count - 2 |
+-------------------+-----------+
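Putting the two together, a per-frame counter could look something like this sketch (countedDrawArrays and the globals are illustrative names; the macro trick above can route the real calls through it):
#include <glad/glad.h>

static int drawCallCount = 0;
static int triangleCount = 0;

void countedDrawArrays(GLenum mode, GLint first, GLsizei count)
{
    glad_glDrawArrays(mode, first, count);
    ++drawCallCount;
    switch (mode) {
    case GL_TRIANGLES:      triangleCount += count / 3; break;
    case GL_TRIANGLE_STRIP: // fall through
    case GL_TRIANGLE_FAN:   triangleCount += count - 2; break;
    default: break;         // points, lines, etc. contribute no triangles
    }
}
// At the end of each frame, read both counters and reset them to 0.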
I figured it out myself, and the result is exactly the same as in Blender.
The drawn objects consist of a lot of meshes, of course. On each render-loop iteration I store the vertex count of every drawn mesh in a simple variable that is first set to zero, then summed up over the frame, and finally divided by three.

Position detection of a defined mark in a picture

I am still a beginner at coding. I am currently working on a program in C/C++ that determines the pixel position of a defined mark (a black circle with white surroundings) in a photo.
I made a mask from the mark and a vector which contains every pixel value of the mask as its elements (using Magick++ I summed the values for red, green and blue). The vector contains approx. 10,000 values since the mask is 100x100 px. I also used threshold functions to simplify the image.
Then I made a grid that does the same for the picture in which I want to find the coordinates of the mark. It is basically a loop that goes through the image, and when the program knows the pixel values in the grid it immediately compares them with the mask. The main idea is to find the lowest difference between the mask and one of the grid positions.
The problem, however, is that evaluating every grid position takes a huge amount of time (e.g. the image is 1920x1080 px, so more than 2 million vectors containing 10,000 values each). I decided to step the grid not by every pixel but by, for example, every 10th column and row, and then, around the best correlation from that pass, I selected an area where I used an every-pixel loop. But this still takes a lot of time.
I would like to ask whether there is some way of improving this method for better (faster) results, or whether the whole idea is not time-efficient and I should use a different approach.
Thanks for every piece of advice!
Edit: The program will be used to process multiple images, and the size will be the same in all of them. This is the picture after thresholding; the mark is the big black dot.
The idea that I find interesting is a pyramidal scheme, or progressive refinement: you find the spot in a lower-resolution image, then search only a small rectangle in the larger image.
If you reduce your image by 2 in each dimension, you reduce the search time by a factor of 4, plus some search effort in the larger image.
This has some problems: the reduction will affect accuracy, I expect. You might miss the spot.
You have to scale the sample (template) down by the same factor, so in this case you create a half-size template. As you halve it again and again, the template gets blurred into the surrounding objects, so at some point a valid template is no longer possible; after halving once, I guess the dot still has a couple of pixels around it.
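A minimal sketch of that coarse-to-fine search (Image, downsample and the sum-of-absolute-differences metric are illustrative assumptions; real code would reuse the Magick++ pixel data):
#include <cstdint>
#include <cstdlib>
#include <climits>
#include <vector>

struct Image {
    int w, h;
    std::vector<uint8_t> px;                       // grayscale, row-major
    uint8_t at(int x, int y) const { return px[y * w + x]; }
};

// Halve the image by averaging 2x2 blocks.
Image downsample(const Image& in)
{
    Image out{ in.w / 2, in.h / 2,
               std::vector<uint8_t>((in.w / 2) * (in.h / 2)) };
    for (int y = 0; y < out.h; ++y)
        for (int x = 0; x < out.w; ++x)
            out.px[y * out.w + x] =
                (in.at(2*x, 2*y) + in.at(2*x + 1, 2*y) +
                 in.at(2*x, 2*y + 1) + in.at(2*x + 1, 2*y + 1)) / 4;
    return out;
}

// Sum of absolute differences between the template and the window at (ox, oy).
long sad(const Image& img, const Image& tpl, int ox, int oy)
{
    long s = 0;
    for (int y = 0; y < tpl.h; ++y)
        for (int x = 0; x < tpl.w; ++x)
            s += std::abs(int(img.at(ox + x, oy + y)) - int(tpl.at(x, y)));
    return s;
}

// Exhaustive best match inside the window [x0,x1] x [y0,y1].
void bestMatch(const Image& img, const Image& tpl,
               int x0, int y0, int x1, int y1, int& bx, int& by)
{
    long best = LONG_MAX;
    bx = x0; by = y0;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            long s = sad(img, tpl, x, y);
            if (s < best) { best = s; bx = x; by = y; }
        }
}
// Usage: run bestMatch on the half-size image and template over the whole
// image, then again at full size in a small window around (2*bx, 2*by).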
As you haven't specified a tool or OS, I will choose ImageMagick which is installed on most Linux distros and is available for OSX and Windows. I am just using it at the command-line here but there are C, C++, Python, Perl, PHP, Ruby, Java and .Net bindings available.
I would use a "Connect Components Analysis" or "Blob Analysis" like this:
convert image.png -negate \
-define connected-components:area-threshold=1200 \
-define connected-components:verbose=true \
-connected-components 8 -auto-level result.png
I have inverted your image with -negate because in morphological operations, the foreground is usually white rather than black. I have excluded blobs smaller than 1200 pixels because your circles seem to have a radius of 22 pixels which makes for an area of 1520 pixels (Pi * 22^2).
That gives this output, which means 7 blobs - one per line - with the bounding box and area of each:
Objects (id: bounding-box centroid area mean-color):
0: 1358x1032+0+0 640.8,517.0 1296947 gray(0)
3: 341x350+1017+287 1206.5,468.9 90143 gray(255)
106: 64x424+848+608 892.2,829.3 6854 gray(255)
95: 38x101+44+565 61.5,619.1 2619 gray(255)
49: 17x145+1341+379 1350.3,446.7 2063 gray(0)
64: 43x43+843+443 864.2,464.1 1451 gray(255)
86: 225x11+358+546 484.7,551.9 1379 gray(255)
Note that, as your circle is 42x42 pixels, you will be looking for a blob that is square-ish and close to that size - so I am looking at the second-to-last line. I can draw it in red on your original image like this:
convert image.png -fill none -stroke red -draw "rectangle 843,443 886,486" result.png
Also, note that as you are looking for a circle, you would expect the area to be pi * r^2 or around 1500 pixels and you can check that in the penultimate column of the output.
That runs in 0.4 seconds on a reasonably specced iMac. Note that you could divide the image into 4 strips and run each one in parallel to speed things up. So, if you do something like this:
#!/bin/bash
# Split image into 4 (maybe should allow 23 pixels overlap)
convert image.png -crop 1x4@ tile-%02d.mpc
# Do Blob Analysis on 4 strips in parallel
for f in tile-*mpc; do
convert $f -negate \
-define connected-components:area-threshold=1200 \
-define connected-components:verbose=true \
-connected-components 8 info: &
done
# Wait for all 4 to finish
wait
That runs in around 0.14 seconds.

std::vector dimension, "tetris" shapes allowed?

Hi, I have a question about the use of vectors in C++. I am working on a problem of simulating particle movement through containers by random motion. I need to add and remove particles as they meet or fail to meet certain criteria, and for this purpose I found the vector class very handy. However, I am new to C++ and there is an efficiency concern I need to consider.
Are the 2D arrays I define limited to being rectangles or squares? I only need to store the position of the particles in each container. What I am afraid of is that my matrix will look like this:
| | | | |
| | | | |
| | | | |
| | | | |
for the 4x4 case, with the entries of each column being the positions of the particles in each bin/container. With the number of particles differing from bin to bin, I wonder if something like this is possible:
| | | | | 4 particles in first bin
| | | 2 particles in second bin, the memory occupied being 2x less than the one above
| | | | | | | | | | | | | | | | | this many in third bin and so on.
I will also need to remove elements from rows (reducing row size) or add elements to rows (increasing row size), or to columns, depending on how I implement my algorithm. I would appreciate it if you could warn me beforehand about common mistakes when dealing with vectors of multiple dimensions, as I am sure to make one, being new to the language :)
You can use a vector of vectors: vector<vector<Particle> >
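For instance, a minimal sketch of that jagged layout (Particle is a stand-in for whatever you store per particle):
#include <vector>

struct Particle { float x, y; };

std::vector<std::vector<Particle>> bins(4);   // 4 bins, each row grows on its own

void example()
{
    bins[0].push_back({ 1.0f, 2.0f });        // add a particle to the first bin
    if (!bins[1].empty())
        bins[1].erase(bins[1].begin());       // remove one from the second bin
}
Each inner vector can have a different size, so a bin with 2 particles really does occupy less memory than one with 4.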
At first, when you ask about "2D arrays ... limited to ... rectangles or squares", it sounds like you are asking how to represent "jagged" arrays (arrays that are not rectangular, but have a fixed "height" with a variable "width" per row).
But "tetris" shapes (tetrominoes) don't lend themselves particularly well to jagged arrays. It makes me think you actually want a sparse array. That is, you'd like to store only the positions of particles, and not store positions of non-particles.
The easiest way to do this is to simply skip the grid, and directly maintain a list of positions of occupied spaces/particles.
struct Position
{
float X;
float Y;
};
// ...
std::vector<Position> particles; // std::list works too...
But plain lists aren't very efficient for some purposes. If you need to have spatially indexed access to these, for example to find out how many particles are in a given volume/area within your simulation, then you should use a space partitioning data structure that still allows sparse population.
People commonly do this the way you are describing, with a rectangular grid, then storing a list inside each grid location of the particles contained in that grid cell. But that "wastes space" for grid cells that aren't used. It doesn't solve the sparse population problem.
A popular data structure that supports both spatial indexing and sparse population is a quadtree.
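For comparison, here is a minimal sketch of the grid-of-lists variant described above (the cell size and bounds are illustrative assumptions); a quadtree refines the same idea by subdividing only the occupied regions:
#include <vector>

struct Particle { float x, y; };

struct Grid {
    float cell;                                  // cell edge length
    int cols, rows;
    std::vector<std::vector<Particle>> cells;    // cols * rows buckets

    Grid(float cellSize, int c, int r)
        : cell(cellSize), cols(c), rows(r), cells(c * r) {}

    void insert(const Particle& p)
    {
        int cx = int(p.x / cell), cy = int(p.y / cell);
        if (cx >= 0 && cx < cols && cy >= 0 && cy < rows)
            cells[cy * cols + cx].push_back(p);  // unused buckets stay empty
    }

    // Particles in the bucket containing (x, y); a real query would also
    // visit the neighbouring cells (no bounds check here, for brevity).
    const std::vector<Particle>& around(float x, float y) const
    {
        return cells[int(y / cell) * cols + int(x / cell)];
    }
};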

Importing 3ds Max OBJ using C++

I have created an OBJ loader that can import .OBJ files exported from 3ds Max into my simple OpenGL viewer/app. At the heart of this is a Vector3.h written with the help of some tutorials.
It worked great on a couple of models I used, but the one I want to work with has something different that wasn't accounted for: it has 4 points in its faces instead of 3. Here is a sample of the lines I am working with:
g Box02
usemtl Wood_Bark
s 4
f 1/1/1 2/2/1 3/3/1 4/4/2
f 4/1/3 3/2/3 5/3/3
The first 'f' line has 4 vertices I am interested in. My Vertex3.h takes X, Y, Z. In the other models I had, all lines were like the second 'f' line, with only 3 elements. I am getting a vertex-out-of-range error, and when I checked where it was happening, I saw it was on this line, so I assume it is because there is more data on the line than can be handled. Here is the entire Vertex3.h:
http://pastebin.com/dgGSBSFe
And this is the line of code that fails. vertices is a Vector3.
tempVertices.push_back ( vertices[--vertex] );
My question is: what is the 4th point? How would you account for it in something like my Vector3.h file? It seems like I need to create a Vector4.h and ignore the 4th value if there are only 3 on the line. But I would like to know more about what I am dealing with, and any tips on how to handle it. Is the 4th element an alpha or something? How should it be used, if at all, in my calculations in Vector3.h?
A face with four points is called a quad. Usually if you want to render it you should break it up into two triangles.
So for example, you have this:
 ___
|   |
|   |
|___|
You need to turn it into this:
 ___
|\  |
| \ |
|__\|
Assuming the vertices go counter-clockwise (the default in OpenGL), starting from the upper left, you could make two triangles. The first triangle's vertices would be the quad's first, second, and third vertices. The second triangle's vertices would be the quad's third, fourth, and first vertices.
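A minimal sketch of that split, applied to the vertex indices parsed from an 'f' line (the names are illustrative; a face with N vertices yields N - 2 triangles as a fan):
#include <cstddef>
#include <vector>

struct Triangle { int a, b, c; };   // indices into the loaded vertex list

// Fan-triangulate a polygon face: (v0,v1,v2), (v0,v2,v3), ...
std::vector<Triangle> triangulate(const std::vector<int>& face)
{
    std::vector<Triangle> tris;
    for (std::size_t i = 1; i + 1 < face.size(); ++i)
        tris.push_back({ face[0], face[i], face[i + 1] });
    return tris;
}
// A quad {v0, v1, v2, v3} becomes {v0, v1, v2} and {v0, v2, v3}: the same
// two triangles as described above, with the winding preserved.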
Why don't you export as triangles only? Convert to Editable Mesh in 3ds Max, make all edges visible, and export. Or simply use the appropriate OBJ export option.