I have been trying to solve this problem for several days now, and I am officially stuck; I need to draw the topological plot of an EEG signal on the brain, and I haven't found any C++ libraries that already do so. There is such a library in MATLAB, but that is considered a last resort; for now it is preferred to do all the processing in C++.
Basically what I need is a way to interpolate the color points in image 1 in order to produce image 2. They belong to different eeg diagrams, which is why they do not match.
My question is: is there any commonly known algorithm that will allow me to interpolate the points in image 1 in order to produce image 2?
I like the "Irregular grid (scattered data)" methods suggested by @Pavel in a comment.
To implement a simple but fast rendering solution where each output color is based on only three source colors, you could do a Delaunay triangulation and then use Gouraud shading to render the triangles using the known vertex colors.
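As a minimal sketch of the per-pixel step (the vector and color types here are illustrative, not from any particular library): compute the barycentric weights of the pixel inside its Delaunay triangle and blend the three known vertex colors accordingly:

#include <array>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// Gouraud-style interpolation inside one triangle: the color at point p is
// the barycentric-weighted blend of the three vertex colors c[0..2].
Color shadePoint(const Vec2& p, const std::array<Vec2, 3>& v,
                 const std::array<Color, 3>& c)
{
    // Denominator is zero only for a degenerate (zero-area) triangle.
    float d = (v[1].y - v[2].y) * (v[0].x - v[2].x)
            + (v[2].x - v[1].x) * (v[0].y - v[2].y);
    float w0 = ((v[1].y - v[2].y) * (p.x - v[2].x)
              + (v[2].x - v[1].x) * (p.y - v[2].y)) / d;
    float w1 = ((v[2].y - v[0].y) * (p.x - v[2].x)
              + (v[0].x - v[2].x) * (p.y - v[2].y)) / d;
    float w2 = 1.0f - w0 - w1;  // the three weights sum to one inside the triangle
    return { w0 * c[0].r + w1 * c[1].r + w2 * c[2].r,
             w0 * c[0].g + w1 * c[1].g + w2 * c[2].g,
             w0 * c[0].b + w1 * c[1].b + w2 * c[2].b };
}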
Your sample image 2 is "softer" than what such flat triangle shading would produce, so I suspect it uses a higher-order interpolation scheme.
Since the interpolation method influences interpretation of the data, be careful to select one that reduces incorrect interpretations.
Related
I have a function for generating a 3D matrix of grey values (char values from 0 to 255). Now I want to generate a 3D object out of this matrix, i.e. I want to display these values as a 3D object (in C++). What is the best way to do that, platform-independently and as fast as possible?
I have already read a bit about using OpenGL, but then I run into the following problem: the matrix can contain up to $4\cdot10^9$ values. Loading the complete matrix into RAM at once would exhaust it, so drawing directly from the matrix is impossible. Furthermore, I have only found functions for drawing 2D images in OpenGL. Is there a way to draw 3D pixels in OpenGL, or should I use another approach altogether?
I do not need any movement functionality (at least not at the moment); I just want to display the data.
Edit 2: To narrow the question down: is there a way to draw pixels in 3D space with OpenGL, taking the data from a 3D matrix? I did not find a suitable function; I only found 2D functions.
What you're looking to do is called volume rendering. There are various techniques to achieve it, and ultimately it depends on what you want it to look like.
There is no simple way to do this either; you can't just draw 3D pixels. You can draw using GL_POINTS and have each transformed point rasterize to one pixel, but that will probably be completely unsatisfactory, because it only puts a few isolated pixels on screen (you won't see much at high resolutions).
A general solution would be to render a cube out of normal triangles for each point, sorted back to front if you need alpha blending. If you want a more specific answer, you will need to narrow your request. Ray tracing also has merits in volume rendering; to learn more, read up on volume rendering.
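As a rough sketch of the GL_POINTS route (all names here are illustrative): skip empty voxels and subsample so the vertex list stays small enough to upload to a VBO. For a volume too big for RAM you would run this in slabs rather than over the whole array at once.

#include <cstddef>
#include <cstdint>
#include <vector>

struct PointVertex { float x, y, z; std::uint8_t value; };

// Extract renderable points from a w*h*d volume of grey values, keeping only
// voxels at or above 'threshold' and visiting every 'step'-th voxel per axis.
// The result can be uploaded to a VBO and drawn with GL_POINTS, or used as
// cube centers for the triangle-based approach.
std::vector<PointVertex> extractPoints(const std::uint8_t* volume,
                                       int w, int h, int d,
                                       std::uint8_t threshold, int step)
{
    std::vector<PointVertex> points;
    for (int z = 0; z < d; z += step)
        for (int y = 0; y < h; y += step)
            for (int x = 0; x < w; x += step) {
                std::uint8_t v =
                    volume[(static_cast<std::size_t>(z) * h + y) * w + x];
                if (v >= threshold)  // skip "empty" voxels to keep memory down
                    points.push_back({ float(x), float(y), float(z), v });
            }
    return points;
}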
Finding Circle Edges:
Here are the two sample images that I have posted.
I need to find the edges of the circle:
Is it possible to develop one generic circle algorithm that could find all possible circles in all scenarios? For example:
1. The circle may be a different color (white, black, gray, red)
2. The background color may be different
3. The circle may differ in size
http://postimage.org/image/tddhvs8c5/
http://postimage.org/image/8kdxqiiyb/
Please suggest an approach to an algorithm that would work on the circles above.
Sounds like a job for the Hough circle transform:
I have not used it myself so far, but it is included in OpenCV. Among other parameters, you can give it a minimum and maximum radius.
Here are links to documentation and a tutorial.
I'd imagine the circle in your second example picture will be very hard to detect, though.
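For reference, a minimal sketch of how the call might look, assuming a reasonably recent OpenCV; the file name and all parameter values here are illustrative and will need tuning per image:

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    // "circles.png" is a placeholder name. HoughCircles wants a
    // single-channel image, and a little blur suppresses false hits.
    cv::Mat img = cv::imread("circles.png");
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::medianBlur(gray, gray, 5);

    std::vector<cv::Vec3f> circles;  // each entry: (center_x, center_y, radius)
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                     1,              // accumulator resolution (same as the image)
                     gray.rows / 8,  // minimum distance between circle centers
                     100, 30,        // Canny high threshold, accumulator threshold
                     10, 200);       // minimum and maximum radius, in pixels

    // Draw the detections for inspection.
    for (const cv::Vec3f& c : circles)
        cv::circle(img, cv::Point(cvRound(c[0]), cvRound(c[1])),
                   cvRound(c[2]), cv::Scalar(0, 255, 0), 2);
    cv::imwrite("detected.png", img);
}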
You could apply an edge detection transformation to both images.
Here is what I did in Paint.NET using the outline effect:
You could try the edge detect effect too, but that requires more contrast in the images.
Another thing to take into consideration is what exactly you want to detect. In the first image, do you want to detect the white ring or the disc inside it? In the second image, do you want to detect all the circles (there are many tiny ones) or just the big one(s)? These requirements will influence which transformation to use and how to initialize it.
After transforming the images into versions that 'highlight' the circles you'll need an algorithm to find them.
Again, there is more than one option. Here is a paper describing an algorithm.
Searching the web for image processing circle recognition gives lots of results.
I think you will have to use a couple of different feature calculations that can be used for segmentation. In the first picture the circle is recognizable by intensity alone, so that one is easy. In the second picture it is mostly the texture that differentiates the circle edge; in that case a feature image based on some kind of texture filter will be needed. Calculating the local variance, for instance, will result in a scalar image that can segment out the circle. If other features define the circle in other scenarios (different colors for background and foreground, etc.), you might need other explicit filters that give a scalar difference for those cases.
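As a sketch of such a texture filter, assuming OpenCV is available (the function name and window size are illustrative): estimate the local variance as E[x^2] - E[x]^2 using two box filters.

#include <opencv2/imgproc.hpp>

// Local variance as a texture feature: textured regions (the ring in the
// second image) light up, while smooth regions stay dark.
cv::Mat localVariance(const cv::Mat& gray, int windowSize = 7)
{
    cv::Mat f, mean, meanOfSquares;
    gray.convertTo(f, CV_32F);
    cv::boxFilter(f, mean, CV_32F, cv::Size(windowSize, windowSize));
    cv::boxFilter(f.mul(f), meanOfSquares, CV_32F,
                  cv::Size(windowSize, windowSize));
    return meanOfSquares - mean.mul(mean);  // Var = E[x^2] - E[x]^2
}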
When you have scalar images where the circles stand out you can use the circular Hough transform to find the circle. Either run it for different circle sizes or modify it to detect a range of sizes.
If you know that there will be only one circle, and you know the kind of noise that will be present (vertical/horizontal lines, etc.), an alternative approach is to design a more specific algorithm, e.g. filter out the noise and find the center of gravity.
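A minimal sketch of the center-of-gravity step, assuming OpenCV and assuming the circle has already been isolated as a binary mask (the names are illustrative):

#include <opencv2/imgproc.hpp>

// Centroid of the foreground pixels via image moments.
cv::Point2d circleCenter(const cv::Mat& binaryMask)
{
    cv::Moments m = cv::moments(binaryMask, /*binaryImage=*/true);
    return { m.m10 / m.m00, m.m01 / m.m00 };  // undefined if the mask is empty
}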
Answer to comment:
The idea is to separate the algorithm into independent stages. I do not know how your specific algorithm works, but presumably it could take a binary or grayscale image where a high value means the pixel is part of a circle and a low value means it is not; it also needs to give some kind of confidence value for the circle it finds. This present algorithm would then represent some stage(s) at the end of the complete pipeline.
You will then have to add a first stage that generates feature images for every kind of input you want to handle. For the two examples it should suffice to have one intensity image (simply grayscale) and one image where each pixel represents the local variance. In the color case, do a color transform and use the hue value, perhaps? For every input, feed all feature images to the later stage and use the confidence value to select the most likely candidate. If there are other unknowns that your algorithm needs as input parameters (circle size, etc.), just iterate over the possible values and make sure your later stages return confidence values.
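A sketch of that staged structure in code, with your existing detector passed in as a callback (all names here are hypothetical; the point is only the shape of the pipeline):

#include <opencv2/core.hpp>
#include <functional>
#include <vector>

// One candidate per feature image, each scored by the later detection stage.
struct Candidate { cv::Vec3f circle; double confidence; };

// Run the same detection stage over every feature image and keep the
// candidate with the highest confidence.
Candidate bestCircle(const std::vector<cv::Mat>& featureImages,
                     const std::function<Candidate(const cv::Mat&)>& detect)
{
    Candidate best{ {}, -1.0 };
    for (const cv::Mat& f : featureImages) {
        Candidate c = detect(f);  // your existing stage, returning a confidence
        if (c.confidence > best.confidence)
            best = c;
    }
    return best;
}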
I want to create a 2D game with monsters built from a custom vertex mesh and a texture map. I want to use this mesh to provide smooth vector animations. I'm using OpenGL ES 2.0.
For now the best idea I have is to write a simple editor where I can create a mesh and make key-frame-based animations by changing the position of each vertex and specifying the key-frame interpolation technique (linear, quadratic, and so on).
I also have some understanding of bone animation (and skinning based on bones), but I'm not sure I will be able to make good skeletons for my monsters.
I'm not sure this is a good way to go. Can you suggest some better ideas and/or editors and libraries for such mesh animations?
PS: I'm using C++, so C++ libraries are most welcome.
You said this is a 2D game, so I'm going to assume your characters are flat polygons onto which you apply a texture map. Please add more detail to your question if this is not the case.
As far as the C++ part goes, I think the same principles used for 3D blend-shape animation can be applied to this case. For each character you will have a list of possible "morph targets" or poses, each being a different polygon shape with the same number of vertices. The character's AI will determine when to change from one to another, and how long a transition takes. So at any given point in time your character is either in a fixed state, matching one of your morph targets, or in a transition state between two poses. The first case is trivial; the second is handled by interpolating the vertices of the two polygons one by one to arrive at a morphed polygon. You can start with linear interpolation and see if that is sufficient, but I suspect you will want to at least apply an easing function to the start and end of the transitions, maybe the smoothstep function.
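A minimal sketch of that per-vertex blend with smoothstep easing (the types and names are illustrative, and both poses are assumed to have the same vertex count):

#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Smoothstep easing: 0 at t=0, 1 at t=1, with zero slope at both ends.
float smoothstep(float t)
{
    return t * t * (3.0f - 2.0f * t);
}

// Blend two poses; 't' in [0,1] is the normalized transition time.
std::vector<Vec2> blendPoses(const std::vector<Vec2>& from,
                             const std::vector<Vec2>& to,
                             float t)
{
    const float s = smoothstep(t);
    std::vector<Vec2> out(from.size());
    for (std::size_t i = 0; i < from.size(); ++i) {
        out[i].x = from[i].x + (to[i].x - from[i].x) * s;
        out[i].y = from[i].y + (to[i].y - from[i].y) * s;
    }
    return out;
}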
As far as authoring these characters, have you considered using Blender? You can design and test your characters entirely within this package, then export the meshes as .obj files that you can easily import into your game.
Does anyone know a good algorithm for converting a vector path into a stroked path that is composed of triangle/quad faces? Ideally with round line joins.
Basically I am trying to draw a thick path whose colour is based upon a value that varies with the distance along the path. I'm thinking of converting the path to triangles/quads and texture mapping it, providing the distance along the path as a 1D texture coordinate that can then be used to retrieve the colours at the corners of the triangles and interpolate.
Any other suggestions on how to do this that won't look terrible and can be anti-aliased would be appreciated.
I'm using AGG for rendering currently, but I could maybe use an alternative provided it doesn't have too many dependencies; I guess the back end used for rendering doesn't really matter. While AGG can stroke paths, the VertexSource interface does not allow for additional vertex information other than the x/y coordinates. Additionally, getting my colour mapping into the rasterizer doesn't look feasible when using the normal conv_stroke.
Here's another great resource for understanding the mechanics of stroking a path.
For anyone looking for a solution to this, I found this useful:
https://keithp.com/~keithp/talks/cairo2003.pdf
So you can effectively convolve a regular polygon with the line to generate the mesh. Outputting triangles requires a slightly more complicated algorithm than the one outlined in the PDF, but it's not actually too difficult to extend.
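As a rough illustration of the simplest variant, without the round joins (the types and names are illustrative): offset each polyline point along its normal and carry the accumulated arc length as the 1D texture coordinate.

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };
struct StrokeVertex { Vec2 pos; float u; };  // u = distance along the path

// Build a triangle-strip stroke for an open polyline. Joins here are the
// naive averaged kind; round joins would insert fan vertices at each corner.
std::vector<StrokeVertex> strokePolyline(const std::vector<Vec2>& pts,
                                         float halfWidth)
{
    std::vector<StrokeVertex> strip;
    float dist = 0.0f;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        // Tangent direction: at interior points, average the two segments.
        std::size_t a = (i == 0) ? i : i - 1;
        std::size_t b = (i + 1 == pts.size()) ? i : i + 1;
        float dx = pts[b].x - pts[a].x;
        float dy = pts[b].y - pts[a].y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len > 0.0f) { dx /= len; dy /= len; }
        float nx = -dy, ny = dx;  // left-hand normal
        if (i > 0) {
            float sx = pts[i].x - pts[i - 1].x;
            float sy = pts[i].y - pts[i - 1].y;
            dist += std::sqrt(sx * sx + sy * sy);  // accumulated arc length
        }
        strip.push_back({{pts[i].x + nx * halfWidth,
                          pts[i].y + ny * halfWidth}, dist});
        strip.push_back({{pts[i].x - nx * halfWidth,
                          pts[i].y - ny * halfWidth}, dist});
    }
    return strip;  // render as GL_TRIANGLE_STRIP; u is the 1D texture coordinate
}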
You can also write a custom span generator for AGG along the lines of agg::span_gouraud_rgba but one that effectively does texture mapping instead.
From my last question: Marching Cube Question
However, I am still unclear on the following:
1) How do I create an imaginary cube/voxel to check whether a vertex is below the isosurface?
2) How do I know which vertex is below the isosurface?
3) How does each cube/voxel determine which cube index/surface to use?
4) How do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple. How do I proceed?
Can anybody who is familiar with Marching Cubes help me?
I only know C++ and OpenGL. (C is a little bit beyond me.)
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source. That's how MRI scans work. The second approach is to make an implicit function F() which takes a point/vertex as its parameter and returns a new scalar. Consider this function:
#include <cmath>  // for std::sqrt; Vector3 is assumed to be your own vector type

float computeScalar(const Vector3<float>& v)
{
    // Distance from the point to the origin.
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
This computes the distance from the point to the origin for every point in your scalar field. If the isovalue is the radius, you have just figured out a way to represent a sphere.
This is because |v| <= R is true for all points inside the sphere or on its surface. Just figure out which vertices are inside the sphere and which are outside; you want to use the less-than or greater-than operators because the surface divides space in two. When you know which points in your cube are classified as inside and outside, you also know which edges the isosurface intersects, and you can end up with anything from zero to five triangles per cube. The positions of the mesh vertices are computed by interpolating across the intersected edges to find the actual intersection points.
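A minimal sketch of those two steps, classifying the corners and interpolating along an intersected edge (the types and names are illustrative, not from any particular implementation):

struct Vec3 { float x, y, z; };

// Classify the 8 corners of a cube: bit i is set when corner i is below
// the isovalue. The resulting index selects an entry in the edge/tri tables.
int cubeIndex(const float corner[8], float iso)
{
    int index = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] < iso) index |= (1 << i);
    return index;
}

// Linear interpolation along an intersected edge: find the point where the
// scalar field crosses the isovalue between corners p1 and p2.
Vec3 interpolateEdge(float iso, const Vec3& p1, const Vec3& p2,
                     float val1, float val2)
{
    float t = (iso - val1) / (val2 - val1);  // the edge is intersected, so val1 != val2
    return { p1.x + t * (p2.x - p1.x),
             p1.y + t * (p2.y - p1.y),
             p1.z + t * (p2.z - p1.z) };
}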
If you want to represent, say, an apple with scalar fields, you would either need to get a source dataset to plug into your application, or use a pretty complex implicit function. I recommend getting simple geometric primitives like spheres and tori to work first, and then expanding from there.
1) It depends on your implementation. You'll need a data structure where you can look up the values at each corner (vertex) of the voxel or cube. This can be a 3D image (i.e. a 3D texture in OpenGL), a customized array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are various optimizations for this, but in general, start with the first corner and just check the values of all 8 corners of the cube.
3) Most (fast) implementations build a bitmask from those 8 inside/outside tests and use it as an index into a static lookup table of cases; there are only so many possible configurations.
4) Once you've made the triangles from the triTable, you can use OpenGL to render them.
Let's say I have point cloud data of an apple. How do I proceed?
This isn't going to work directly with marching cubes. Marching cubes requires voxel data, so you'd need to use some algorithm to put the point cloud into a cubic volume; Gaussian splatting is an option here.
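As a rough sketch of the simplest possible voxelization, binning points into a density grid over a known bounding box (all names are illustrative; real splatting would spread each point over nearby voxels with a kernel):

#include <cstddef>
#include <vector>

struct Point { float x, y, z; };

// Bin a point cloud into an n*n*n density volume. Assumes boxMax > boxMin
// on every axis; points outside the box are ignored.
std::vector<float> voxelize(const std::vector<Point>& cloud,
                            int n, Point boxMin, Point boxMax)
{
    std::vector<float> volume(static_cast<std::size_t>(n) * n * n, 0.0f);
    for (const Point& p : cloud) {
        int i = int((p.x - boxMin.x) / (boxMax.x - boxMin.x) * (n - 1));
        int j = int((p.y - boxMin.y) / (boxMax.y - boxMin.y) * (n - 1));
        int k = int((p.z - boxMin.z) / (boxMax.z - boxMin.z) * (n - 1));
        if (i >= 0 && i < n && j >= 0 && j < n && k >= 0 && k < n)
            volume[(static_cast<std::size_t>(k) * n + j) * n + i] += 1.0f;
    }
    return volume;  // run marching cubes on this with a density isovalue
}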
Normally, if you are working from a point cloud, and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes, and is fully open sourced.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/