As far as I can see from "OpenGL draw rectangle outline", given a proper array of vertices, GL_LINE_LOOP should draw a square.
So, I'm trying this Wavefront .obj file:
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 1.0 1.0 0.0
v 0.0 1.0 0.0
g myPlane
f 1 2 3 4
... and I would have expected that x,y of (0,0) -> (1,0) -> (1,1) -> (0,1) would produce a square. However, I'm trying this in a program which is a reduced version of https://github.com/julianstorer/JUCE/blob/master/examples/Demo/Source/Demos/OpenGLDemo.cpp ... where this is used:
attributes.enable (openGLContext);
glDrawElements (GL_LINE_LOOP, vertexBuffer.numIndices, GL_UNSIGNED_INT, 0); //GL_TRIANGLES
attributes.disable (openGLContext);
... as C++ drawing code, and the output for the above .obj file is:
... that is, there is a diagonal line, and I have no idea how it is possible for it to end up there if I use GL_LINE_LOOP (there are images, like this one, that show that GL_LINE_LOOP should not draw a diagonal for this sequence of vertices). So why do I get a diagonal, what could be the problem causing it, and how can I get rid of it?
Thanks to the comment by @genpfault, I found that it is indeed the parser that decomposes the faces into triangles; the parser is WavefrontObjParser.h, and it contains, among other things:
struct Face
{
    Face (String::CharPointerType t)
    {
        while (! t.isEmpty())
            triples.add (parseTriple (t));
    }

    Array<TripleIndex> triples;
    ...
... which, I guess, indicates that the original mesh is split into a new, triangulated mesh...
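To make the effect concrete, here is a small sketch (not the demo's actual code) of what the triangulation does to the index list, and one possible way to draw only the outline:
// The quad face "f 1 2 3 4" becomes two triangles after parsing:
//
//     outline indices:      0, 1, 2, 3
//     triangulated indices: 0, 1, 2,  0, 2, 3
//
// GL_LINE_LOOP connects consecutive indices, so the jumps 2 -> 0 and
// 0 -> 2 in the triangulated list trace the square's diagonal.
// One workaround: keep a second index buffer with the face's original
// vertex order and draw the outline from that instead:

GLuint outlineIndices[] = { 0, 1, 2, 3 };  // straight from the .obj face

// ... upload outlineIndices into their own GL_ELEMENT_ARRAY_BUFFER, then:
glDrawElements (GL_LINE_LOOP, 4, GL_UNSIGNED_INT, 0);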
I want to implement a function in C++/RealBasic to create a color gradient from the following parameters:
Width and height of the image
2 colors of the gradient
Angle (direction) of the gradient
Strength of the gradient
The following links show some examples of the desired output image:
http://www.artima.com/articles/linear_gradients_in_flex_4.html, http://i.stack.imgur.com/4ssfj.png
I have found multiple examples, but they only give me vertical and horizontal gradients, while I want to specify the angle and strength too.
Can someone help me please?
P.S.: I know only a little about geometry!! :(
Your question is very broad, and as is, this is a pretty complex exercise with a lot of code, including image rendering, image format handling, writing the file to disk, etc. These are not the matter of a single function. Because of this, I will focus on making an arbitrary linear color gradient of 2 colors.
Linear color gradient
You can create a linear color "gradient" by linearly interpolating between 2 colors. However, simple linear interpolation makes really harsh-looking transitions. For visually more appealing results I recommend using some kind of S-shaped interpolation curve, like the Hermite-interpolation-based smoothstep.
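For reference, here is a minimal sketch of that smoothstep curve (the classic Hermite form; the function name is mine):
// Hermite smoothstep: clamp t to [0, 1], then apply 3t^2 - 2t^3.
// Near 0 and 1 the slope approaches zero, which softens the transition.
template <typename T>
T smoothstep(T t)
{
    t = t < T(0) ? T(0) : (t > T(1) ? T(1) : t);
    return t * t * (T(3) - T(2) * t);
}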
Regarding the angle, you can define a line segment by the start (p0) and end (p1) points of the color gradient. Let's call the distance between them d01, so d01 = distance(p0, p1). Then for each pixel point p of the image, you have to compute the closest point p2 on this segment. Here is an example of how to do that. Then compute t = distance(p0, p2) / d01. This will be the lerp parameter t in the range [0, 1].
Interpolate between the 2 gradient colors with this t and you get the color for the given point p.
This can be implemented in multiple ways. You can use OpenGL to render the image, then read the pixel buffer back into RAM. If you are not familiar with OpenGL or the rendering process, you can write a function which takes a point (the 2D coordinates of a pixel) and returns an RGB color; that way you can compute all the pixels of the image. Finally, you can write the image to disk using an image format, but that's another story.
The following are example C++14 implementations of some functions mentioned above.
Simple linear interpolation:
template <typename T, typename U>
T lerp(const T &a, const T &b, const U &t)
{
    return (U(1) - t)*a + t*b;
}
where a and b are the two values (colors in this case) you want to interpolate between, and t is the interpolation parameter in the range [0, 1] representing the transition between a and b.
Of course the above function requires a type T which supports multiplication by a scalar. You can simply use any 3D vector type for this purpose, since colors are actually coordinates in color space.
Distance between two 2D points:
#include <cmath>

// A minimal 2D point type so the helpers below are self-contained.
struct Point2 { float x, y; };

Point2 operator-(const Point2 &a, const Point2 &b)
{
    return { a.x - b.x, a.y - b.y };
}

auto length(const Point2 &p)
{
    return std::sqrt(p.x*p.x + p.y*p.y);
}

auto distance(const Point2 &a, const Point2 &b)
{
    Point2 delta = b - a;
    return length(delta);
}
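Putting the pieces together, here is a sketch of a per-pixel color function under my own naming: RGB is a hypothetical 3-component color type, and lerp, smoothstep and Point2 are the helpers above. Instead of explicitly constructing the closest point p2, it computes t directly from a dot product, which is equivalent:
struct RGB { float r, g, b; };

// Scalar-times-color and color-plus-color, so RGB works with lerp above.
RGB operator*(float s, const RGB &c) { return { s*c.r, s*c.g, s*c.b }; }
RGB operator+(const RGB &a, const RGB &b) { return { a.r+b.r, a.g+b.g, a.b+b.b }; }

// Color of pixel p for a gradient running from p0 (color c0) to p1 (color c1).
RGB gradientColor(const Point2 &p, const Point2 &p0, const Point2 &p1,
                  const RGB &c0, const RGB &c1)
{
    // t = distance(p0, p2) / d01, where p2 is the closest point to p on
    // the segment; the dot product over the squared length gives it directly.
    const float dx = p1.x - p0.x, dy = p1.y - p0.y;
    float t = ((p.x - p0.x)*dx + (p.y - p0.y)*dy) / (dx*dx + dy*dy);

    // Clamp so pixels beyond the segment's ends get the end colors,
    // then soften the transition with smoothstep.
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
    t = smoothstep(t);

    return lerp(c0, c1, t);
}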
[Image: linear gradient example, from https://developer.mozilla.org/en-US/docs/Web/CSS/linear-gradient]
I am trying to understand how the marching cubes algorithm works.
Source:
http://paulbourke.net/geometry/polygonise/
What I don't understand is how you calculate the "GRIDCELL" values. To be exact, the
double val[8];
part is not clear to me -- what is it actually supposed to contain?
typedef struct {
    XYZ p[8];
    double val[8];
} GRIDCELL;
As I understand, XYZ p[8]; are the vertex coordinates for the output cube. But what is val[8];?
The marching cubes algorithm is -- as explained in the linked description -- an algorithm to build a polygonal representation from sampled data. The
double val[8];
are the samples at the 8 vertices of the cube. So they are not computed; they are measurements, e.g. from MRI scans. The algorithm works the other way around: it takes a set of measured numbers and constructs a surface representation from them for visualization.
The val is the level of "charge" for each vertex of the cell; it depends on the type of shape that you want to create.
For example, if you want to make a ball, you can sample the values with the formula:
for (int l = 0; l < 8; ++l)
{
    float distance = sqrtf(powf(cell.p[l].x - chargepos.x, 2.0f)
                         + powf(cell.p[l].y - chargepos.y, 2.0f)
                         + powf(cell.p[l].z - chargepos.z, 2.0f));
    cell.val[l] = chargevalue / powf(distance, 2.0f);
}
After further reading and research, the explanation is quite simple.
First of all:
A voxel represents a value on a regular grid in three-dimensional space.
This value is simply a sample of the density of the space; the so-called "isosurface" is the surface extracted where that density crosses a chosen level.
double val[8];
To simplify:
Basically this should be a value in the range -1.0f to 0.0f,
where -1.0f means solid and 0.0f means empty space.
Perlin/simplex noise can be used to generate the iso values, for example.
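As an illustration, here is a small sketch (the helper names are mine; XYZ and GRIDCELL are the types from Paul Bourke's page) that fills a cell's val array by sampling a scalar field, here a sphere, so that the iso level 0 extracts the sphere's surface:
#include <math.h>

// Signed distance to a sphere: negative inside, positive outside,
// zero exactly on the surface, so iso level 0 extracts the sphere.
double sphereField(XYZ p, XYZ center, double radius)
{
    double dx = p.x - center.x;
    double dy = p.y - center.y;
    double dz = p.z - center.z;
    return sqrt(dx*dx + dy*dy + dz*dz) - radius;
}

// Sample the field at the cell's 8 corner positions.
void sampleCell(GRIDCELL &cell, XYZ center, double radius)
{
    for (int i = 0; i < 8; ++i)
        cell.val[i] = sphereField(cell.p[i], center, radius);
}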
I modified a rectification algorithm. It returns 2 OpenCV homographies (3x3 matrices). I can use cv::warpPerspective and get rectified images, so the algorithm works correctly. But I need to apply these homographies to textures in OpenGL. So I create a 4x4 matrix (HomoGl) and use
glMultMatrixf(HomoGl);
to apply this transform. To fill HomoGl I use
for (int i = 0; i < 3; ++i) {
    for (int j = 0; j < 3; ++j) {
        HomoGL[i + j*4] = HomoCV.at<double>(i, j);
    }
}
This method has the best result... but it is wrong. I tested some other methods [1], but they don't work.
My question: how can I convert the OpenCV homography so that I can use glMultMatrixf to get correctly transformed images?
[1]http://www.aiqus.com/questions/24699/from-2d-homography-of-2-planes-to-3d-rotation-of-opengl-camera
So an H matrix is the transformation of a point on plane 1 to another point on plane 2:
X1 = H*X2
When you use warpPerspective in OpenCV, you are putting the points into the perspective of plane 2.
The matrix (or image Mat) that you get out of that warping is the texture you should use when applying it to the surface.
Your extension of the 3x3 homography to 4x4 is wrong. The most naive approach which will somewhat work would be an extension of the form
        h11 h12 h13              h11 h12 0 h13
H   =   h21 h22 h23    ->  H' =  h21 h22 0 h23
        h31 h32 h33              0   0   1  0
                                 h31 h32 0 h33
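As a sketch (using HomoCV and HomoGl from the question, and assuming HomoCV is a 3x3 CV_64F cv::Mat), filling the 4x4 array in the column-major order that glMultMatrixf expects would look like this:
// Embed the 3x3 homography into a 4x4 matrix following the H' layout
// above. OpenGL wants column-major storage: HomoGl[col*4 + row].
float HomoGl[16];

HomoGl[0]  = (float)HomoCV.at<double>(0, 0);  // h11
HomoGl[1]  = (float)HomoCV.at<double>(1, 0);  // h21
HomoGl[2]  = 0.0f;
HomoGl[3]  = (float)HomoCV.at<double>(2, 0);  // h31

HomoGl[4]  = (float)HomoCV.at<double>(0, 1);  // h12
HomoGl[5]  = (float)HomoCV.at<double>(1, 1);  // h22
HomoGl[6]  = 0.0f;
HomoGl[7]  = (float)HomoCV.at<double>(2, 1);  // h32

HomoGl[8]  = 0.0f;                            // z column passes z through
HomoGl[9]  = 0.0f;
HomoGl[10] = 1.0f;
HomoGl[11] = 0.0f;

HomoGl[12] = (float)HomoCV.at<double>(0, 2);  // h13
HomoGl[13] = (float)HomoCV.at<double>(1, 2);  // h23
HomoGl[14] = 0.0f;
HomoGl[15] = (float)HomoCV.at<double>(2, 2);  // h33

glMultMatrixf(HomoGl);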
The problem with this approach is that while it gives the correct result for x and y, it will distort z, since the modified w component affects all coordinates. If the z coordinate matters, you need a different approach.
In this paper, an approximation is proposed which will minimize the effects on the depth (see equation 5; you will also need to normalize your homography so that h33=1). However, this approximation will only work well enough for small distortions. If you have some extreme trapezoid distortion, that approach will also fail. In that case, a 2-pass approach of rendering into the texture and then applying the 2D distortion is possible.
With the modern programmable pipeline, one could also deal with this in one pass by undistorting the z coordinate in the fragment shader (but that can have some negative impact on performance on its own).
I want to imitate HTML's well-known, great RECTANGLE. I mean all of the characteristics of rectangles, like borders, border-radius, triangulated quads on corners, etc. I don't want to use any libraries except my own. I would like to create this one for the sake of learning and experience, and also to use it in the future as a GUI system. I am working on this concept of shuffled rectangles.
It is composed of:
4 GL_TRIANGLES as quadrilateral on corners
4 arcs on corners
4 borders on all sides
And one big rectangle on the front
And these are the outputs I made so far :)
w/o border-radius
w/ border-radius
So the things I am really, really confused about are:
Border-sizes
Border-locations
Is it the X, Y's or the W, H's
When to draw (or not draw) the front-most rectangle
Anything I don't know yet.
...Please comment on anything else I should include here for clarification. Thanks!
Edit:
Hmm... okay, as for a minimal question only: I just wanted to implement these things and programmatically compute their values as I change a single attribute of the rectangle:
border-radii-sizes
border-sides
I'm putting too many images here, please understand me :(
*left-border
*corner
I was thinking of that kind of rectangle positioning, and it's really difficult in my head to compute their coordinates or sizes based on the set of attributes I'm going to define in the designing part. For example, if I define border-radius-top-left to have a value of 50%, with its counterpart border-size-left having a certain value, what formula must I consider? Or must I add some components/private attributes to make this happen?
Yey!! I have figured it out!!
Please allow me to discuss my [problem solved] here on SO.
Sorry for my unclassified art :) I made it colorful for property separation.
Explanation:
Arcs, which serve as the corner radii.
The formula for the points (x, y) is automatically generated here:
corner-radii points (x, y) are the target.
Points (x, y) should be the only ones adjusted, based on the given radii values.
The curved part of this should be static in position.
Fake borders, which are the inner side borders.
Properties of these, such as [x, y, width, height], depend on the corner-radii points (x, y) and points (x, y).
Inner quad, which is the inner rectangle.
This just serves as a cover.
Properties of this, such as [x1, y1, x2, y2] (this is a polygon, so I labeled it like that), depend on points (x, y).
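For the corner-radii points (x, y), here is a sketch of how they could be generated (a hypothetical helper, not the actual class API): a quarter arc swept around the corner's pivot, with rx and ry coming from values like the ones passed to setBorderRadius below:
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Quarter arc for one corner: pivot (cx, cy), radii rx and ry, and the
// starting angle in degrees (e.g. 180 = NW, 270 = NE, 0 = SE, 90 = SW).
std::vector<Pt> cornerArc(float cx, float cy, float rx, float ry,
                          float startDeg, int segments = 16)
{
    std::vector<Pt> pts;
    const float deg2rad = 3.14159265f / 180.0f;
    for (int i = 0; i <= segments; ++i)
    {
        // Sweep 90 degrees from the start angle in 'segments' steps.
        float a = (startDeg + 90.0f * i / segments) * deg2rad;
        pts.push_back({ cx + rx * std::cos(a), cy + ry * std::sin(a) });
    }
    return pts;
}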
Now I can simply do this:
Designing Part:
int w3 = rect.width >> 3;   // one eighth of the width
int h3 = rect.height >> 3;  // one eighth of the height
rect.setBorderRadius(C_NW, w3, h3);
rect.setBorderRadius(C_NE, w3, h3);
rect.setBorderRadius(C_SE, w3, h3);
rect.setBorderRadius(C_SW, w3, h3);
rect.setColors(C_NW, cc::getColor(COLORS::RED));
rect.setColors(C_NE, cc::getColor(COLORS::GREEN));
rect.setColors(C_SE, cc::getColor(COLORS::BLUE));
rect.setColors(C_SW, cc::getColor(COLORS::YELLOW));
rect.setBorderColor(B_TOP, cc::getColor(COLORS::WHITE));
rect.setBorderColor(B_RIGHT, cc::getColor(COLORS::WHITE));
rect.setBorderColor(B_BOTTOM, cc::getColor(COLORS::GRAY3));
rect.setBorderColor(B_LEFT, cc::getColor(COLORS::GRAY3));
rect.setBorderSize(B_TOP, 20);
rect.setBorderSize(B_RIGHT, 20);
rect.setBorderSize(B_BOTTOM, 20);
rect.setBorderSize(B_LEFT, 20);
Results:
rect is the one with Lightning McQueen image.
Those are the formulas I evaluated based on trial and error.
Also, thanks to Sir Mark Garcia for helping me by demonstrating some diagrams, and for suggesting MS Paint for visualization :)
The next problem is masking, as you can see that there are non-curved borders and corner radii at the same time, but I won't focus on that at this moment.
If ever someone is interested in this kind of rectangle implementation, you can PM me here and I'll give you the source code.
I'm trying to import *.x files into my engine and animate them using OpenGL (without shaders for now, but that isn't really relevant right now). I've found the format reference on MSDN, but it doesn't help much with this problem.
So, basically, I've created a file containing a simple animation of a demon-like being with 7 bones (main, 2 for the tail, and 4 for the legs), of which only 2 (the ones in the right leg) are animated at the moment. I've tested the mesh in DXViewer, and it seems to work perfectly there, so the problem must be on my code's side.
When I export the mesh, I get a file containing lots of information, of which 3 places are important for the skeletal animation (all the matrices below are for the RLeg2 bone):
SkinWeights - matrixOffset
-0.361238, -0.932141, -0.024957, 0.000000,
0.081428, -0.004872, -0.996669, 0.000000,
0.928913, -0.362066, 0.077663, 0.000000,
0.139213, -0.057892, -0.009323, 1.000000
FrameTransformMatrix
0.913144, 0.000000, -0.407637, 0.000000,
0.069999, 0.985146, 0.156804, 0.000000,
0.401582, -0.171719, 0.899580, 0.000000,
0.000000, -0.000000, 0.398344, 1.000000
AnimationKey matrix in bind pose
0.913144, 0.000000, -0.407637, 0.000000,
0.069999, 0.985146, 0.156804, 0.000000,
0.401582, -0.171719, 0.899580, 0.000000,
0.000000, -0.000000, 0.398344, 1.000000
My question is: what exactly do I do with these matrices? I've found an equation on the Newcastle University site (http://research.ncl.ac.uk/game/mastersdegree/graphicsforgames/skeletalanimation/), but there's only 1 matrix there. The question is: how do I combine these matrices to get the vertex transform matrix?
This post does not pretend to be a full answer, but rather a set of helpful links.
How to get all information needed for animation
The question is how you import your mesh, and why you do it that way. You can fight with .x meshes for months, but it doesn't make much sense, because .x is a very basic, old, and really not good enough format. You won't find many fans of the .x format on StackOverflow. =)
.x files store animation data in a tricky way. They were intended to be loaded via a set of D3DX*() functions. But to get bones and weights out of them manually, you must preprocess the loaded data. That is a lot of things to code. Here is a big post explaining how:
Loading and displaying .X files without DirectX
A good way to do things is to just switch to some mesh loading library. The most popular and universal one is Assimp. At the least, look at their docs and/or source code to see how they handle loading and preprocessing, and what they have as output. Also, here is a good explanation:
Tutorial 38 - Skeletal Animation With Assimp
So, with Assimp you can stop fighting and begin animating right now. And maybe later, when you have an idea of how it all works, you can write your own loader.
When you've got all information needed for animation
Skeletal animation is a basic topic that is explained in detail all around the web.
You can find basic vertex shader for animation here:
OpenGL Wiki: Skeletal Animation
Here is an explanation of how it all works (but implemented in fixed-function style): Basic Bones System
Hope it helps!
Since Drop provided links that talk about the problem and give clues on how to solve it, but don't quite provide a simple answer, I feel obliged to leave the solution here, in case someone else stumbles on the same problem.
To get the new vertex position in "bind pose"
v'(i) = v(i)*Σ(transform(bone)*W(bone,i))
where:
v'(i) - new vertex position,
v(i) - old vertex position, and
W(bone,i) - weight of the transformation.
(and of course Σ is the sum over all the bones in the skeleton)
The transform(bone) is equal to sw(bone) * cM(bone), where sw is the matrix found inside the SkinWeights tag, and cM(bone) is calculated using a recursive function:
Matrix4 cM(const Bone *bone)
{
    if (bone->parent)
        return bone->localTransform * cM(bone->parent);
    else
        return bone->localTransform;
}
The localTransform is the matrix located inside the FrameTransformMatrix tag.
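As a sketch of that summation (Matrix4, Vec3, Bone, skinOffset, zero() and transformPoint are hypothetical names; skinOffset stands for the matrix from the SkinWeights tag, and cM is the recursive function above):
#include <vector>

// v'(i) = v(i) * Σ (sw(bone) * cM(bone) * W(bone, i)); 'bones' and
// 'weights' list the bones influencing this vertex and their weights.
Vec3 skinVertex(const Vec3 &v,
                const std::vector<const Bone*> &bones,
                const std::vector<float> &weights)
{
    Matrix4 sum = Matrix4::zero();  // accumulate the weighted transforms
    for (size_t i = 0; i < bones.size(); ++i)
        sum = sum + (bones[i]->skinOffset * cM(bones[i])) * weights[i];
    return sum.transformPoint(v);   // v' = v * sum
}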
To get the new vertex position in a certain animation frame
Do the exact same operation as mentioned above, but instead of the matrix in FrameTransformMatrix, use one of the matrices inside the appropriate AnimationKey tag. Note that when an animation is playing, the matrix inside the FrameTransformMatrix tag becomes unused. This means you'll probably end up ignoring it most of the time.