I am coding Tetris in Qt C++ at the moment. The game is almost complete and the only thing left to add is rotation. What I am struggling with is the theory behind the rotation. Tetris has 7 different kinds of pieces (I, S, Z, L, J, T, O). Is there any algorithm or common approach for handling the rotations of the different shapes?
What I would prefer not to do is have a switch case over 7 different shapes to handle the rotations. Also, if a shape like L is rotated, it has 4 different orientations, each of which has to be handled differently.
So the only thing I have thought of so far is to switch on the shape and then on the orientation. That would mean switches nested inside switches, or else-ifs inside a switch: a lot to type and a lot of branches to check.
P.S. My stone structure looks like this (Steine = German for stone, Art = shape):
struct position
{
    int X;
    int Y;
};

struct Steine
{
    struct position* Position;
    int Art;
};
You could represent each shape as a 2D array of bool. Then rotating the array rotates the shape (you could even have the code generate all the rotations at initialization). After rotating, check whether any cell falls outside the playfield borders, or whether the rotation has to be rejected because one of its cells would land on a cell already occupied by a previous piece.
Edit: Yeah, as you said yourself, it is best to try it on paper/Paint first to work out the middle point of the rotation. Every shape then fits into a 3x3 or 4x4 grid. For 3x3 you rotate around its middle cell; for 4x4 you can rotate around the cell at (1,1), for example (where indices go from 0 to 3). That is roughly how I did it in my own Tetris about 9 years ago.
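For illustration, here is a minimal sketch of rotating such a boolean grid 90 degrees clockwise (the fixed N x N size and std::array storage are just assumptions for this example):

#include <array>
#include <cstddef>

// Rotate an N x N boolean shape grid 90 degrees clockwise:
// cell (row, col) of the source ends up at (col, N - 1 - row) in the result.
template <std::size_t N>
std::array<std::array<bool, N>, N> rotateCW(const std::array<std::array<bool, N>, N>& shape)
{
    std::array<std::array<bool, N>, N> rotated{};
    for (std::size_t row = 0; row < N; ++row)
        for (std::size_t col = 0; col < N; ++col)
            rotated[col][N - 1 - row] = shape[row][col];
    return rotated;
}

Each true cell of the rotated grid can then be tested against the playfield for out-of-bounds or already occupied positions, as described above.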
I am at a loss. I have been trying to implement the code at http://www.blackpawn.com/texts/pointinpoly/default.html
However, I don't understand how the cross product used there between two 2D vectors can itself result in a 2D vector. It does not make sense to me. The same thing appears in some examples of intersection between polygons and lines in the fine book "Real-Time Collision Detection", where even scalar triple products of 2D vectors appear in the code (see page 189, for instance).
The issue is that, as far as I can see, the pseudo cross product of two 2D vectors can only result in a scalar (v1.x*v2.y - v1.y*v2.x), or at most in a 3D vector if one adds two zero components, since that scalar represents the Z dimension. But how can it result in a 2D vector?
I am not the first one to ask this, and coincidentally the earlier question arose while trying to use the same code example: Cross product of 2 2D vectors. However, as can easily be seen, the answer, the updated original question, and the comments in that thread ended up being quite a mess, if I dare say so.
Does anyone know how I should get these 2D vectors from the cross product of two 2D vectors? If code is to be provided, I can handle C#, JavaScript and some C++.
EDIT - here is a piece of the code in the book as I mentioned above:
int IntersectLineQuad(Point p, Point q, Point a, Point b, Point c, Point d, Point &r)
{
    Vector pq = q - p;
    Vector pa = a - p;
    Vector pb = b - p;
    Vector pc = c - p;
    // Determine which triangle to test against by testing against diagonal first
    Vector m = Cross(pc, pq);
    float v = Dot(pa, m); // ScalarTriple(pq, pa, pc);
    if (v >= 0.0f) {
        // Test intersection against triangle abc
        float u = -Dot(pb, m); // ScalarTriple(pq, pc, pb);
        if (u < 0.0f) return 0;
        float w = ScalarTriple(pq, pb, pa);
        ....
For the page you linked, it seems that they are talking about a triangle in 3d space:
Because the triangle can be oriented in any way in 3d-space, ...
Hence all the vectors they talk about are 3d vectors, and all the text and code makes perfect sense. Note that even for 2d vectors everything still makes sense, if you consider the cross product to be a 3d vector pointing out of the screen. And they mention it on the page too:
If you take the cross product of [B-A] and [p-A], you'll get a vector pointing out of the screen.
Their code is correct too, both for 2d and 3d cases:
function SameSide(p1, p2, a, b)
    cp1 = CrossProduct(b-a, p1-a)
    cp2 = CrossProduct(b-a, p2-a)
    if DotProduct(cp1, cp2) >= 0 then return true
    else return false
For 2d, both cp1 and cp2 are vectors pointing out of the screen, and the (3d) dot product is exactly what you need to check; checking just the product of the corresponding Z components is the same. If everything is 3d, this is also correct. (Though I would write simply return DotProduct(cp1, cp2) >= 0.)
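To make the 2d reading concrete, here is a small sketch (my own illustration, not code from the page or the book) where the 2d "cross product" returns just that out-of-screen scalar:

struct Vec2 { float x, y; };

// The 2d pseudo cross product: the Z component of the 3d cross product
// of (x, y, 0) vectors. It is a scalar, not a vector.
float Cross2D(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// p1 and p2 are on the same side of line ab if the two out-of-screen
// components have the same sign, i.e. their product is non-negative.
bool SameSide(Vec2 p1, Vec2 p2, Vec2 a, Vec2 b)
{
    Vec2 ab   = { b.x - a.x, b.y - a.y };
    float cp1 = Cross2D(ab, { p1.x - a.x, p1.y - a.y });
    float cp2 = Cross2D(ab, { p2.x - a.x, p2.y - a.y });
    return cp1 * cp2 >= 0.0f;
}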
For IntersectLineQuad(), I can guess that the situation is the same: the quad, whatever it is, is a 3d object, as are Vector and Point in the code. However, if you add more details about what this function is supposed to do, that will help.
In fact, any problem stated in 2d can be extended to 3d, so any approach that is valid in 3d will also be valid for the 2d case; you just need to imagine a third axis pointing out of the screen. So I think describing a 2d problem completely in 3d terms is a valid (though confusing) technique. You might find yourself doing some extra work, because some values will always be zero in such an approach, but in return (almost) the same code will work in the general 3d case too.
I have a working class that generates regular polygons given the polygon center, the polygon radius, and the number of sides. Implementation details of the two private member functions are here.
The class interface looks like this:
class RegularPolygon : public Closed_polyline {
public:
    RegularPolygon(Point c, int r, int n)
        : center(c), radius(r), sidesNumber(n)
    { generatePoly(); }
private:
    Point center;
    int radius;
    int sidesNumber;
    void generatePoly();
    void rotateCoordinate(Point& axisOfRotation, Point& initial,
                          double angRads, int numberOfRotations);
};
Problem:
I am asked to implement a second way of generating regular polygons by using
a set of coordinates1. The constructor first needs to perform a validity check on the passed coordinates:
RegularPolygon(vector<Point>& vertices)
    : center(), radius(), sidesNumber()
{
    // validity check of the elements of vertices
}
My initial thought is to:
Check that each pair of consecutive coordinates produces the same side length.
Check the relative orientation of each line (generated by a pair of consecutive coordinates); they should be at an angle of 360°/number of sides from each other.
Question:
How could I check if all lines are properly oriented, i.e. their relative orientation? solved
Is there any standard algorithm that can determine if a set of coordinates are vertices of a regular polygon?
Note:
After checking [1] and all the questions and answers regarding generating coordinates, I didn't find what I'm searching for.
1 In clockwise sequence, passed with the vector: vertices
All the additional files needed for compilation can be found here. FLTK can be found here.
Your task would be a lot simpler if you could find the center of your polygon. Then you would be able to check the distance from that center to each vertex to verify the placement of the vertices on the circle, and also to check the angles from the center to each individual vertex.
Fortunately, there is an easy formula for finding the center of such a polygon: all you need to do is average the coordinates in both dimensions. With the center coordinates in hand, verify that
The distance from the center to each vertex is the same, and
The angle between consecutive vertices is the same, and that angle is equal to 2π/N radians
These two checks are sufficient to ensure that you have a regular polygon. You do not need to check the distances between consecutive vertices.
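As a rough sketch of those two checks (my own illustration: a hypothetical Pt struct with double coordinates rather than the Point type above, a tolerance eps because of floating-point/integer rounding, and no validation of the winding direction):

#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; }; // stand-in for the Point type used above

bool isRegularPolygon(const std::vector<Pt>& v, double eps = 1e-6)
{
    const std::size_t n = v.size();
    if (n < 3) return false;

    Pt c{0.0, 0.0}; // center = average of the vertices
    for (const Pt& p : v) { c.x += p.x; c.y += p.y; }
    c.x /= n; c.y /= n;

    const double pi   = std::acos(-1.0);
    const double r0   = std::hypot(v[0].x - c.x, v[0].y - c.y);
    const double step = 2.0 * pi / n; // expected angle between consecutive vertices

    for (std::size_t i = 0; i < n; ++i) {
        const Pt& a = v[i];
        const Pt& b = v[(i + 1) % n];

        // 1. Every vertex must be at the same distance from the center.
        if (std::fabs(std::hypot(a.x - c.x, a.y - c.y) - r0) > eps) return false;

        // 2. Consecutive vertices must be 2*pi/n apart as seen from the center.
        double da = std::atan2(b.y - c.y, b.x - c.x) - std::atan2(a.y - c.y, a.x - c.x);
        da = std::fabs(std::remainder(da, 2.0 * pi)); // wrap into [0, pi]
        if (std::fabs(da - step) > eps) return false;
    }
    return true;
}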
My question: how do I convert a 3D coordinate to a position on a 2D screen?
I read a lot about this, but all my research only turned up half-answered or even unanswered replies, and some were wrong (I tested them), so I am asking again and trying to add as much detail as possible.
Here are the structures we will work with:
struct COORD3D
{
    int X;
    int Y;
    int Z;
};

struct PAN3D //Rotate around these axes
{
    float X;
    float Y;
    float Z;
};

struct COLOR
{
    Uint8 R; //RED
    Uint8 G; //GREEN
    Uint8 B; //BLUE
    Uint8 A; //ALPHA
};

struct CAMERA
{
    COORD3D Pos;
    PAN3D Rot;
    float angle;
    int RESX; //X resolution
    int RESY; //Y resolution
};

struct POI
{
    COORD3D Pos;
    COLOR Col;
};

struct OBJECT
{
    COORD3D Pos;                  //absolute position
    list<POI> dots;               //relative to object position
    list<pair<POI*, POI*>> lines; //pointers to the dots to connect them
    PAN3D Rot;                    //absolute rotation
};
Now what I need is a function that looks like:
POI Convert3dTo2d(CAMERA cam, POI point);
(It must return a POI because of the color ;) )
I already have an algorithm that iterates over all objects and all of their points.
And the fact that there is a camera tells you that it's not an orthographic view but a perspective one.
Please comment the code you write here properly so everyone can understand it.
If you have no clue how to do this, or only have approaches or indirect solutions, please don't answer; that doesn't really help.
http://sta.sh/0de60ynp9id <- This image should describe it a bit
OS: Windows 7, Microsoft Visual Studio 2013 (I'm just using the C++ part of it).
I build it for x64 (if that matters ;) ),
but I don't think that is important for a bit of math algorithms.
If you got any questions, feel free to ask me
Okay, I think I have found a (new to me) way to do this. I am going to try it tomorrow and will tell you if it works (that's the part everyone forgets, but I will try not to).
You need to multiply your point by a view and projection matrix.
A view matrix translates the point into camera space. Aka, relative to CAMERA.
A projection matrix transforms the point from 3D space into a projection space. You'll need to decide what sort of projection you want. For example, orthographic projection or perspective projection.
See the matrix at the bottom of these pages for the layout of these matrices.
LookAtLH, or the view matrix:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb281710(v=vs.85).aspx
OrthoLH, or the projection matrix using orthographic projection:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb281723(v=vs.85).aspx
You'll also need to look into how to perform matrix multiplication.
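If you just want something minimal without a full matrix library, here is a rough sketch of the perspective part only (my own illustration, not using the structs from the question). It assumes the point is already in camera space, i.e. the camera sits at the origin looking down +Z, so the view matrix / camera rotation step is left out:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Project a camera-space point onto the screen with a simple perspective projection.
Vec2 projectToScreen(Vec3 p, float fovDegrees, int resX, int resY)
{
    const float aspect = static_cast<float>(resX) / resY;
    const float f = 1.0f / std::tan(fovDegrees * 0.5f * 3.14159265f / 180.0f);

    // Perspective divide: points farther away move toward the center of the screen.
    const float xNdc = (p.x * f / aspect) / p.z; // normalized device coords in [-1, 1]
    const float yNdc = (p.y * f) / p.z;

    // Map from [-1, 1] to pixel coordinates (y flipped so +y points up on screen).
    Vec2 screen;
    screen.x = (xNdc * 0.5f + 0.5f) * resX;
    screen.y = (1.0f - (yNdc * 0.5f + 0.5f)) * resY;
    return screen;
}

The color of the POI would just be copied through unchanged; only the position needs converting.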
The only way I can interpret this question is that you want to project one or more 3d points onto a 2d plane. If that's not what you're looking for, clarify your question. If that is what you want, countless resources for 3d projection are around: http://en.wikipedia.org/wiki/3D_projection
You will need to multiply your points by a projection matrix to project (or "convert"?) them onto a 2d plane.
I suggest looking at the following links for an explanation of transforming 3D coordinates to 2D coordinates:
The OpenGL transform pipeline
OpenGL transform
I'm experimenting with OpenGL 3.2+ and have started loading Obj files/models in 3D and trying to interact with them.
(following tutorials from sources like this site)
I was wondering what the easiest way (if it's possible) would be to set up collision detection between two existing (loaded) Obj objects/models, without using third-party physics engines etc.?
The easiest possible algorithm that meets your criteria detects collisions between spheres that enclose your meshes. Here you can see an implementation example.
The simplest collision model is to use bounding boxes. The principle is simple: you surround your object with a box defined by two points, minimum and maximum. You then use these points to determine whether two boxes intersect.
In my engine the structure of bounding box and collision-detection method are set as this:
typedef struct BoundingBox
{
    Vector3 min; //Contains lowest corner of the box
    Vector3 max; //Contains highest corner of the box
} AABB;

//True if collision is detected, false otherwise
//(the boxes overlap only if they overlap on every axis)
bool detectCollision( BoundingBox a, BoundingBox b )
{
    return a.min.x <= b.max.x && b.min.x <= a.max.x
        && a.min.y <= b.max.y && b.min.y <= a.max.y
        && a.min.z <= b.max.z && b.min.z <= a.max.z;
}
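For loaded Obj models, the two corner points can be computed from the model's vertices. Below is a rough sketch, assuming the vertices are available as a std::vector<Vector3> with float x/y/z members (how you actually store them will differ) and that the vertex list is not empty:

#include <algorithm>
#include <vector>

// Sketch: build an axis-aligned bounding box around a loaded model's vertices.
AABB computeBoundingBox(const std::vector<Vector3>& vertices)
{
    AABB box;
    box.min = box.max = vertices.front(); // start from the first vertex
    for (const Vector3& v : vertices)
    {
        box.min.x = std::min(box.min.x, v.x); // grow the box so that it
        box.min.y = std::min(box.min.y, v.y); // contains every vertex
        box.min.z = std::min(box.min.z, v.z);
        box.max.x = std::max(box.max.x, v.x);
        box.max.y = std::max(box.max.y, v.y);
        box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}

If a model moves, translate its box by the model's position each frame (or simply recompute it) before calling detectCollision.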
Another simple method is to use spheres. This method is useful for objects of similar size in all dimensions, but it produces lots of false collisions if they are not. In this method, you surround your object with a sphere of radius radius centered at position, and when it comes to collision, you simply check whether the distance between the centers is smaller than the sum of the radii; in that case the two spheres intersect.
Again, code snippet from my engine:
struct Sphere
{
    Vector3 position; //Center of the sphere
    float radius;     //Radius of the sphere
};

bool inf::physics::detectCollision( Sphere a, Sphere b )
{
    Vector3 tmp = a.position - b.position; //Vector between the centers
    return (Dot(tmp, tmp) <= pow((a.radius + b.radius), 2));
}
In the code above, Dot() computes the dot product of two vectors; dotting a vector with itself gives you (by definition) the magnitude of the vector squared. Notice how I do not take the square root to get the actual distance, but compare the squares instead to avoid extra computation.
You should also be aware that neither of these methods is perfect; they will give you false collisions from time to time (unless the objects are perfect boxes or spheres), but that is the trade-off for the simple implementation and low computational cost. Nevertheless, it is a good way to start detecting collisions.
I'm currently working on a private project which depends on some operations on polygons using the Boost C++ Libraries.
I'm currently trying to work with the inner polygon/negative polygon concept.
What I need to do now is to join three polygons where two of them have a positive (counterclockwise) outer polygon and a negative (clockwise) inner polygon.
The third one is a negative polygon: a new polygon object with a negative area, i.e. its points run in clockwise direction. And this is where I'm not entirely sure how to handle the situation.
Here's a picture of those three polygons. The middle one, which connects the upper-left polygon with the lower-right one, is the negative one.
Now what I would like to do is to join all three polygons through the union function.
What I expect union to do is to cut away the parts of polygons 1 and 3 (the positive polygons) covered by the negative polygon 2, and return what remains of 1 and 3.
What I actually get are my polygons 1 and 3 untouched, as if there were no negative polygon 2.
Any help will be appreciated.
Edit:
What I need to get is a vector, not a bitmap or a picture or whatever.
These pictures are just used to better visualize what I have and what I need.
Those three polygons are actually nothing more than vectors of x and y points.
Here's a picture of what I would expect to be the correct result of union of all three polygons:
Edit2: Corrected the result
How do you want unions to work? Usually a union of polygons 1 and 2 would result in polygon 3, but I suspect for your use case you want it to result in polygon 4. If that's the case, you can simply do a union of all the clockwise paths, then do a union of the counterclockwise paths, then take the difference of the former from the latter. If you want the union to result in polygon 3, then I don't think there's a consistent way to do what you want.
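If the Boost library in use here is Boost.Geometry, that plan could look roughly like the sketch below. The point/polygon models, the orientation handling and the helper function are my assumptions, not code from the question; run boost::geometry::correct on the inputs first if their winding might not match the model's convention.

#include <vector>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/geometries/multi_polygon.hpp>

namespace bg = boost::geometry;
using BPoint   = bg::model::d2::point_xy<double>;
using BPolygon = bg::model::polygon<BPoint>;          // default model: clockwise outer ring
using BMulti   = bg::model::multi_polygon<BPolygon>;

// positives: polygons that add area; negatives: polygons that cut area away.
// Both lists are assumed to be stored with the library's "positive" orientation.
BMulti subtractNegatives(const std::vector<BPolygon>& positives,
                         const std::vector<BPolygon>& negatives)
{
    BMulti unionPos, unionNeg, result;
    for (const BPolygon& p : positives) {
        BMulti tmp;
        bg::union_(unionPos, p, tmp);   // accumulate the union of the positive polygons
        unionPos = tmp;
    }
    for (const BPolygon& n : negatives) {
        BMulti tmp;
        bg::union_(unionNeg, n, tmp);   // accumulate the union of the negative polygons
        unionNeg = tmp;
    }
    bg::difference(unionPos, unionNeg, result); // cut the negatives out of the positives
    return result;
}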
A good plan is to consider your polygons as a bitmap (of booleans):
Every polygon is blitted to a bitmap of type (R,R)->bool. Once it's in bitmap form, negative polygons are just and-not operations on the booleans:
class Bitmap {
public:
    virtual bool Map(float x, float y) const = 0;
};

class AndNot : public Bitmap {
public:
    AndNot(Bitmap &bm1, Bitmap &bm2) : bm1(bm1), bm2(bm2) { }
    bool Map(float x, float y) const {
        // inside the first bitmap but not inside the second (negative) one
        return bm1.Map(x, y) && !bm2.Map(x, y);
    }
private:
    Bitmap &bm1, &bm2;
};
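A tiny usage sketch (my own illustration, with a hypothetical RectBitmap standing in for a rasterized polygon):

// A rectangular region as a concrete Bitmap, just for demonstration.
class RectBitmap : public Bitmap {
public:
    RectBitmap(float x0, float y0, float x1, float y1)
        : x0(x0), y0(y0), x1(x1), y1(y1) { }
    bool Map(float x, float y) const {
        return x >= x0 && x <= x1 && y >= y0 && y <= y1;
    }
private:
    float x0, y0, x1, y1;
};

bool example()
{
    RectBitmap outer(0, 0, 10, 10); // the positive polygon's area
    RectBitmap hole(4, 4, 6, 6);    // the negative polygon's area
    AndNot shape(outer, hole);      // true inside outer but outside hole
    return shape.Map(5, 2);         // true: inside outer, not inside the hole
}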