Vertices for a 20-sided DnD die - opengl

Does anyone know, or know how I might compute, the vertices for a 20-sided die? I believe it is also known as a regular icosahedron, and that it has 12 vertices. I don't mind the scale, although it would be nice if the vertices were centered so that the center of the die is the origin. I'm thinking of something like
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
{x,y,z},
I'm planning on rendering it with OpenGL. It doesn't have to include face data in triangle strip form, but that would be quite a bonus.
Also, bonus question: does anyone know whether the vertices are all the same distance from the center, meaning they can all be normalized to 1? I suspect they are. If so, a normalized format would be great, but it's not necessary...I'd be happy with whatever data I can get.

I can answer partially. Yes, they are all the same distance from the center; this is true of all Platonic solids. The vertices are given here: https://en.m.wikipedia.org/wiki/Platonic_solid and https://en.m.wikipedia.org/wiki/Regular_icosahedron
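To make that concrete, here is a minimal sketch that generates the 12 vertices from the standard construction (cyclic permutations of (0, ±1, ±φ), where φ is the golden ratio) and divides by the circumradius so every vertex lies on the unit sphere, centered on the origin as requested. The function name is my own; face/index data would still need to be listed separately.

```cpp
#include <array>
#include <cmath>
#include <vector>

// 12 unit-length vertices of a regular icosahedron, centered at the origin.
std::vector<std::array<float, 3>> icosahedronVertices() {
    const float phi = (1.0f + std::sqrt(5.0f)) / 2.0f;  // golden ratio
    const float len = std::sqrt(1.0f + phi * phi);      // circumradius of (0, ±1, ±phi)
    const float a = 1.0f / len;                         // short component, normalized
    const float b = phi / len;                          // long component, normalized

    // Cyclic permutations of (0, ±a, ±b): each vertex has length
    // sqrt(a^2 + b^2) = 1 by construction.
    return {
        {0.0f,  a,  b}, {0.0f,  a, -b}, {0.0f, -a,  b}, {0.0f, -a, -b},
        { a,  b, 0.0f}, { a, -b, 0.0f}, {-a,  b, 0.0f}, {-a, -b, 0.0f},
        { b, 0.0f,  a}, {-b, 0.0f,  a}, { b, 0.0f, -a}, {-b, 0.0f, -a},
    };
}
```

Since the vertices are unit length, each one doubles as its own outward normal, which is convenient for lighting a sphere-like mesh in OpenGL.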

Related

Get all the image pixels with certain pixel values with K-nearest neighbor

I want to obtain all the pixels in an image whose values are closest to certain reference pixels. For example, I have an image with a view of the ocean (deep blue), clear sky (light blue), a beach, and houses. I want to find all the pixels closest to deep blue in order to classify them as water. My problem is that the sky also gets classified as water. Someone suggested the k-nearest-neighbor algorithm, but the few examples online use the old C-style API. Can anyone provide an example of k-NN using the OpenCV C++ API?
"Classify it as water" and "obtain all the pixels in an image with pixel values closest to certain pixels" are not the same task. Color alone is not enough for the classification you describe: there will always be identically colored points in both the water and the sky. So you have to use a more detailed analysis. For instance, if you know your object is connected, you can use something like a watershed to fill the water region and ignore distant, unconnected regions of the same color in the sky (assuming an edge detector can successfully find the horizon line that splits water and sky).
You can also use more information about the object you want to select, such as structure: compute its entropy, etc. Then you can run the k-nearest-neighbor algorithm in a multi-dimensional space where the first three dimensions are color and the fourth is entropy. Alternatively, simply check whether each image pixel lies within an epsilon-neighborhood of the selected pixels in that 4D color-entropy space (three dimensions from color plus one from entropy) using the plain Euclidean metric; that is fast and can be accelerated on the GPU.
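A dependency-free sketch of the nearest-neighbor idea above (this mirrors what OpenCV's cv::ml::KNearest does with k = 1, but avoids the library so the distance computation is explicit). The labels and training colors are hypothetical; a fourth feature such as entropy would just be one more squared term in the distance.

```cpp
#include <limits>
#include <vector>

// One labeled training color, e.g. label 0 = water, 1 = sky.
struct Sample { float r, g, b; int label; };

// 1-nearest-neighbor classification by squared Euclidean distance in RGB.
int classify(float r, float g, float b, const std::vector<Sample>& train) {
    int best = -1;
    float bestDist = std::numeric_limits<float>::max();
    for (const Sample& s : train) {
        float dr = r - s.r, dg = g - s.g, db = b - s.b;
        float d = dr * dr + dg * dg + db * db;  // no sqrt needed for comparison
        if (d < bestDist) { bestDist = d; best = s.label; }
    }
    return best;
}
```

Usage would loop over every pixel of the image and call classify() with its RGB value; with only color as a feature, deep-blue water pixels and similar sky pixels will still collide, which is exactly the limitation described above.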

C++/OpenCV find objects on an image

I have to find a couple of objects on an image. For example find all black pawns on a chessboard:
How can I achieve that,using OpenCV ?
I'm thinking about cv::matchTemplate, but I'm not sure how it would cope with different pawn backgrounds. I'm also not sure whether I can easily get all the matches that way.
Start with corner detection (the well-known Shi-Tomasi method, or something like line detection plus intersections, which should work better in your case) and collect 64 subsamples of the image: the squares. If the board is ideal (a pure bird's-eye view) and you know its size (8x8 here), just crop it into WxH pieces. Save these samples with their coordinates (b6, h1, etc.).
For every square, apply a low-pass filter (something like a Gaussian), then Otsu thresholding and contour detection; this should give you at most one big contour. If there is none, that square is empty.
You can get the contours from the initial state of the board and name them; this is your training data, and since the pieces won't differ much, one sample of each is enough :) Save a "white pawn"'s (any square from the 2nd row initially) area, Hu moments, and color (the mean RGB value is fine). Then save a "black pawn", then the "white queen" and "black queen" (d1 and d8). Build that area/moment/color table for all pieces.
Later, for any state of the board, match the Hu moments, color, and area of each square's contour against your identification table. Of course, a statistical method like k-NN could help you there. You can also use the matchShapes method.
Finally, you identify each contour as something like "black knight", "red checker piece", etc.
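The table lookup in the last step could be sketched as a nearest match in the (area, Hu moment, color) feature space. The scale constants and table entries below are hypothetical placeholders; since area, Hu moments, and color live on wildly different scales, each axis is divided by a rough scale before the distance is taken.

```cpp
#include <limits>
#include <string>
#include <vector>

// One row of the identification table built from the initial board state.
struct PieceEntry {
    std::string name;   // e.g. "black pawn"
    double area, hu0;   // contour area and first Hu moment
    double r, g, b;     // mean color inside the contour
};

// Return the table entry nearest to the probe in normalized feature space.
std::string identify(const PieceEntry& probe, const std::vector<PieceEntry>& table) {
    const double areaScale = 1000.0, huScale = 0.1, colScale = 255.0;  // rough guesses
    std::string best = "unknown";
    double bestDist = std::numeric_limits<double>::max();
    for (const PieceEntry& e : table) {
        double da = (probe.area - e.area) / areaScale;
        double dh = (probe.hu0 - e.hu0) / huScale;
        double dr = (probe.r - e.r) / colScale;
        double dg = (probe.g - e.g) / colScale;
        double db = (probe.b - e.b) / colScale;
        double d = da * da + dh * dh + dr * dr + dg * dg + db * db;
        if (d < bestDist) { bestDist = d; best = e.name; }
    }
    return best;
}
```

In practice the real features would come from cv::contourArea, cv::HuMoments, and cv::mean over the contour mask; the lookup itself is the same.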

Ball detection with OpenCV

I need to use a low-resolution (320 x 240) image in OpenCV to find a large exercise ball, either blue or red. The ball is 25 inches wide and is NOT guaranteed to appear perfectly circular. I tried HoughCircles on a Canny-thresholded image with no success. What am I doing wrong, and what is the best way to get the size of the ball in pixels and its position? That will let me calculate things like how far it is from the camera!
Let me collect all the other advice in one answer:
Use cv::inRange() to select the correct color (more information on that). You might want to apply something like RGB normalization beforehand to make sure you retrieve the complete ball.
Now you have all the pixels that belong to the ball (plus maybe some noise you have to remove). From the connected component of plausible size, take the pixels farthest apart left/right and top/bottom to get the size of the ball. (If the ball doesn't have to be round, you probably want to take the larger of the two values.)
Compute the distance from the camera using the now-known size of the ball in the picture. (Obviously, you have to know the "real" size beforehand for this computation ;))
There are obviously other ways (e.g. edge detection), but this is imo the easiest.
It is easier to give you an answer if you post an example picture.

Sprite radial interpolation to each degree from 45 degree sprites

Question on how this could be done, if possible.
I have sprites for each of the following directions (up, down, left, right, up-right, up-left, down-right, and down-left). I am making a game similar to old-school Zelda, running around a tile map (built with the Tiled editor). This works well, but now I want to be able to shoot arrows/spells at any location on the map. I can do so, but the graphics look horrible because my character only turns in 45-degree increments.
I have worked around this so I can only shoot in the direction my character is facing, but now I can't hit targets that aren't at a 45-degree angle from me. To fix this, I would need a sprite for every single degree, or some way to combine the images at, say, 0 degrees (up) and 45 degrees (up-right) to get, say, 10 degrees via interpolation. Is this possible? Any ideas on how to do it?
I am looking into keyframe animation, since I wouldn't need so many sprites and would use much less video memory (and get smoother animations), but I still run into the same problem. I'd like to know whether this is conceptually possible, and if so, a little pseudocode or a snippet would be much appreciated.
One other question: if this is possible, do I need to render it via OpenGL in 3D? I don't really know whether 3D would help in a 2D (orthogonal tile) game, but it might make falling spells look like they are falling downward rather than just moving across tiles from above to below.
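One common alternative to interpolating between directional sprites, sketched here as an assumption about what would fit this setup: keep a single "up" sprite and rotate its quad at render time by the exact firing angle. This is the same 2D rotation that glRotatef(angleDeg, 0, 0, 1) applies in fixed-function OpenGL, and it needs no 3D rendering.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Rotate point p around the sprite's center by angle radians (counterclockwise).
// Apply this to each of the quad's four corners before drawing the textured quad.
Vec2 rotateAround(Vec2 p, Vec2 center, float angle) {
    float s = std::sin(angle);
    float c = std::cos(angle);
    float dx = p.x - center.x;
    float dy = p.y - center.y;
    return { center.x + dx * c - dy * s,
             center.y + dx * s + dy * c };
}
```

This only works cleanly for artwork that reads correctly at any angle (an arrow or spell, a top-down character); sprites drawn with a fixed perspective per direction generally still need the 8 hand-drawn views.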

Cubemap from panoramic horizontally wrappable image

I'm trying to write an algorithm to generate the "ceiling panel" from a horizontally wrappable panoramic image like the one above. Images 1 to 4 are straight cut-outs for the walls of the cube, but the ceiling is more complicated, as I assume it needs to be composited from parts 5a to 5d. Does anyone know the solution in pseudocode?
My guess is that we need to iterate over the coordinates of the ceiling tile, i.e.
for y = 0 to height
    for x = 0 to width
        color = someFunction(x, y)   // polar coords?
        setPixel(x, y) = color
    next
next
Hmm... I remember doing something like that for a computer vision class back in grad school. It's not impossible, but a LOT of work needs to be done. One approach is to accept a quality loss over the entire output; that's the easiest starting point. Once you've degraded it enough (depending on how much you need to stretch the edges), you can start applying nonlinear transformations to the image. This is probably best approximated by cutting the cylinder into sections by degrees and then applying one of the age-old projections used for flat maps (like Mercator or CADRG or something), but you have to remember to interpolate the pixels: at the very least, average neighboring pixels to approximate. That's the best I can think of.
You can't generate a panorama just by taking photos from a single location and stitching them. Well, you can for a single horizontal set, but it would look ugly (usually you stitch many more than 4 photos to avoid distortions at the edges).
Here, you have even more data in the y-direction, which means even more pictures and some sort of fancy projection to generate the final image.
If you look closely at the panorama you have, you'll notice that the boundary of the sunlit region is not straight. That is because your panorama was projected onto a cylinder, not a cube, so I don't think 1/2/3/4 would look right mapped directly onto a cube.
Bottom line: you really can't treat those 8 chunks as 8 pictures taken from a fixed point. (If you need convincing, try taking 8 such pictures yourself and stitching them together. You'll see how fun the upper row is, and even though the bottom row is easy, how ugly the stitched regions look.)
Now, why you need cube maps drastically changes your options. If you're only after a cube map for cheap environment-mapping effects, the simplest approach is to pick an arbitrary function that maps the edges where you want them and interpolate linearly in between. It's completely the wrong projection, but it ought to give a picture that looks good enough for the intended goal.
If you're looking for something more accurate, then you need to know how the projection was generated, so that you can unproject it before re-projecting it on the cube.
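As a concrete sketch of that unproject/re-project step for the ceiling face, the following assumes the source panorama is equirectangular (longitude linear in x, latitude linear in y); a cylindrical panorama needs a different latitude mapping, and its limited vertical field of view means pixels near the zenith have no source data at all, hence the clamp. The sampling helpers that would read and write actual pixels are left out as hypothetical.

```cpp
#include <cmath>

// Map a ceiling-face pixel (px, py) on a faceSize x faceSize tile to
// panorama coordinates (outX, outY) on a panW x panH image.
void ceilingToPanorama(int px, int py, int faceSize, int panW, int panH,
                       double& outX, double& outY) {
    const double pi = std::acos(-1.0);

    // Face coordinates in [-1, 1]; the ceiling face sits at y = +1, spanning x/z.
    double u = 2.0 * (px + 0.5) / faceSize - 1.0;
    double v = 2.0 * (py + 0.5) / faceSize - 1.0;

    // Unit view direction through this pixel.
    double len = std::sqrt(u * u + 1.0 + v * v);
    double dx = u / len, dy = 1.0 / len, dz = v / len;

    double lon = std::atan2(dx, dz);          // [-pi, pi], 0 along +z
    double lat = std::asin(dy);               // [0, pi/2] on this face

    outX = (lon / (2.0 * pi) + 0.5) * panW;   // longitude -> panorama column
    double rowY = (0.5 - lat / pi) * panH;    // latitude -> panorama row
    outY = rowY < 0.0 ? 0.0 : rowY;           // clamp: no data past the top edge
}
```

The full loop is then exactly the asker's pseudocode: for every (px, py) of the ceiling tile, compute (outX, outY), bilinearly sample the panorama there, and write the result.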
All that said, it's also a lot easier to photograph cube maps directly than to generate them from a panorama, but that might not be an option for you.