Simple yet realistic billiard ball acceleration [closed] - c++

I have a simple 2D game of pool. You can give a ball some speed, and it'll move around and hit other balls. But I want a ball to stop eventually, so I added a drag acceleration by running this code every frame:
balls[i].ax = -balls[i].vx * 0.1;
balls[i].ay = -balls[i].vy * 0.1;
...
if (hypot(balls[i].vx, balls[i].vy) < 0.2) {
    balls[i].vx = 0;
    balls[i].vy = 0;
}
And it works... but it feels wrong, not realistic. I have no physics background, but I'm fairly sure friction should not depend on speed.
How can I improve the physics of slowing down without adding too much complexity?

The rolling friction formula is F_k,r = μ_k,r · F_n. It only depends on the properties of the surfaces (the coefficient of rolling friction, μ_k,r) and the normal force on the ball (F_n), so the ball should decelerate at a constant rate. Just adjust the value until it looks roughly correct.
Example code:
float x = 1.0f; // mess around with this until it looks right
// Subtract a fixed amount of speed per axis each frame; copysign handles
// motion in the negative direction too.
if (std::fabs(ball.xVelocity) > x) { ball.xVelocity -= std::copysign(x, ball.xVelocity); } else { ball.xVelocity = 0; }
if (std::fabs(ball.yVelocity) > x) { ball.yVelocity -= std::copysign(x, ball.yVelocity); } else { ball.yVelocity = 0; }
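If you'd rather apply the slowdown along the direction of travel (so diagonal motion doesn't drift), here is a minimal sketch; the Ball struct and function name are made up for the example, and the per-frame deceleration is a tuning value:

#include <cmath>

struct Ball { float vx = 0.0f, vy = 0.0f; };

// Shrink the velocity vector by a fixed amount each frame, keeping its direction,
// and snap to zero once the ball is slower than one frame's worth of deceleration.
void applyRollingFriction(Ball &b, float decelPerFrame) {
    float speed = std::hypot(b.vx, b.vy);
    if (speed <= decelPerFrame) {
        b.vx = 0.0f;
        b.vy = 0.0f;
        return;
    }
    float scale = (speed - decelPerFrame) / speed;
    b.vx *= scale;
    b.vy *= scale;
}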

Related

detected collision, what now? [closed]

I'm checking collision between the player and every other object and it works, but what do I do now?
This is what I do at the moment, but when the player hits something from below, it teleports on top of the hit object.
Do I just say: teleport on top if the player fell onto the object, set the y velocity to 0 when hit from below, and the x velocity when hit from the sides?
Is that the way to do it?
But this wouldn't work with circle colliders, so how do I get my objects to stop properly on hit?
for (GameObject &g : gameObjects)
{
    if (BoxCollision(player, &g))
    {
        player->velocity.y = 0;
        // set the player's feet to the top of the hit object
        player->transform.pos.y = g.transform.pos.y - player->sprite.height * player->transform.size.y;
        player->canJump = true;
    }
}
collision check:
bool BoxCollision(GameObject* g1, GameObject* g2)
{
    if (g1 == g2)
        return false;

    bool left   = g1->transform.pos.x < g2->transform.pos.x + g2->sprite.width * g2->transform.size.x;
    bool right  = g1->transform.pos.x + g1->sprite.width * g1->transform.size.x >= g2->transform.pos.x;
    bool bottom = g1->transform.pos.y + g1->sprite.height * g1->transform.size.y > g2->transform.pos.y;
    bool top    = g1->transform.pos.y < g2->transform.pos.y + g2->sprite.height * g2->transform.size.y;

    return left && right && bottom && top;
}
When you detect a collision, it is because the bounding volumes of two objects O1 and O2 intersect after a time step of your engine. In other words, your object O1 started at position x in the previous timestep, and now at x + v*dt (where v is its velocity and dt is your timestep) the two objects intersect.
The first order of business is to find a dt' (< dt) at which O1 and O2 just touch (see the sketch after the list below). Once you have that, the decision is yours what to do:
You can set v = 0 after the collision to have object O1 stop dead;
You can invert v (reflecting it off the bounding box) to have O1 bounce away;
You can transfer some velocity to O2 and calculate motion for the remainder of the timestep;
You can destroy O2 and have O1 continue along its path.
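Since the question mentions circle colliders, here is a minimal sketch of finding that fraction of the timestep for two circles; the struct and function names are made up for the example, and O2 is assumed to be static:

#include <cmath>

struct Vec2 { float x, y; };

// Returns the fraction t in [0, 1] of the timestep at which circle O1
// (centre p1, radius r1, velocity v) first touches the static circle O2
// (centre p2, radius r2), or -1 if they don't touch during this step.
float timeOfImpact(Vec2 p1, float r1, Vec2 v, float dt, Vec2 p2, float r2)
{
    Vec2 d = { p2.x - p1.x, p2.y - p1.y };   // O1 -> O2 at the start of the step
    Vec2 m = { v.x * dt,    v.y * dt    };   // O1's motion over the full step
    float R = r1 + r2;

    // Solve |d - m*t|^2 = R^2, i.e. (m.m) t^2 - 2 (d.m) t + (d.d - R^2) = 0.
    float a = m.x * m.x + m.y * m.y;
    float b = -2.0f * (d.x * m.x + d.y * m.y);
    float c = d.x * d.x + d.y * d.y - R * R;
    float disc = b * b - 4.0f * a * c;
    if (a == 0.0f || disc < 0.0f) return -1.0f;

    float t = (-b - std::sqrt(disc)) / (2.0f * a);
    return (t >= 0.0f && t <= 1.0f) ? t : -1.0f;
}

Once you have t, advance O1 by v * dt * t so the circles just touch, then apply whichever of the responses above you prefer to its velocity for the remaining (1 - t) * dt.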

How do you convert natural units of world coordinate system to opengl unit? [closed]

I am building a physics engine, and therefore I am also learning OpenGL to be able to visualize what the physics engine is doing. I am wondering how to convert natural units (e.g. 1 meter, 1 inch, etc.) to OpenGL units.
I have done some research, and it seems that the OpenGL unit is not really defined. Does this mean that I could map the value 0.01f to 1 cm? Then, if I had a circle centered at cx and cy and I wanted it to drop by 1 cm, could I do the following?
float cx = 0.05f;
float cy = 0.05f;
cy -= .01;
Here are some snippets of the toy graphics code so far:
void drawCircle() {
    float r = 0.05;
    int num_segments = 1000;
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_LINE_LOOP);
    float thetaStep = 2.0f * M_PI / num_segments;
    for (float theta = 0; theta < 2.0f * M_PI; theta += thetaStep) {
        float x = r * cosf(theta);
        float y = r * sinf(theta);
        glVertex2f(x + cx, y + cy);
    }
    glEnd();
    glFlush();
}

GLvoid Timer(int value) {
    cy -= 0.01;
    if (value) {
        glutPostRedisplay();
    }
    glutTimerFunc(30, Timer, value);
}
The above code snippets work: they move the ball down at a constant rate of 0.01 OpenGL units per timer tick.
Since the OpenGL coordinate system runs between -1 and 1, it seems the correct way is to scale. If I want the world to be on a scale of 300 meters, then it spans -300 to 300. To convert to an OpenGL coordinate you simply divide the natural coordinate by the scale factor. For example, if I had a circle at position <0, 100 m> in the world coordinate system, you would convert to the OpenGL coordinate system by dividing each component by 300 (assuming the OpenGL coordinate system is on a scale of 300 meters). This gives the OpenGL coordinate <0, 0.3333>.
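A tiny sketch of that conversion (the helper name and the 300 m half-extent default are just for illustration):

struct Vec2 { float x, y; };

// Divide world coordinates (metres) by the half-extent of the visible world
// to get coordinates in OpenGL's -1..+1 range.
Vec2 worldToGL(Vec2 metres, float worldHalfExtent = 300.0f) {
    return { metres.x / worldHalfExtent, metres.y / worldHalfExtent };
}

// worldToGL({0.0f, 100.0f}) gives {0.0f, 0.3333f}.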
The units you use are entirely up to you. If you want to treat 1 unit as a meter, an inch, or a millimetre, that's entirely up to you. If everything is scaled accordingly, it won't make a difference to OpenGL, so choose the unit that makes sense for your physics engine and stick with it (so that you avoid constantly converting units).
The only real consideration is whether your units fit in floats, or whether you need to fall back to doubles for position values. For example, if you use floats for position and assume 1 GL unit = 1 meter, you'll run out of millimetre precision at about 25 km from the origin.
As for saying OpenGL has a range between -1 and 1, that's not correct as stated: that's the range of values after they have been transformed into normalized device coordinates. It makes no difference whether you work with values between -FLT_MAX and +FLT_MAX or within a range of -10 to +10; you'll get the exact same result.
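If you don't want to divide positions by hand, you can let the projection matrix do the world-to-clip scaling instead. Here is a minimal sketch assuming the same legacy GLUT/immediate-mode setup as the question, with a 300 m half-extent chosen purely for illustration:

#include <GL/glut.h>

// Map a world spanning -300..+300 metres in each axis onto clip space,
// so positions handed to glVertex2f can stay in metres.
void setupWorldProjection() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-300.0, 300.0, -300.0, 300.0);   // left, right, bottom, top (metres)
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

With this in place, drawCircle can pass positions in metres straight to glVertex2f.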

Ball to fixed ball collision [closed]

I am trying to implement balls that change their direction when they hit an obstacle (in this case a fixed ball). I can detect when a collision occurs, but I don't know how to modify the direction of a ball once it hits an obstacle. Here's some code:
struct Vec2
{
    float x, y;
};

struct Ball
{
    void onCollision(const Ball& fixedBall)
    {
        // This function will be called when a collision occurs.
        // Speed will be constant; only the direction needs to change.
    }

    void update()
    {
        position += direction * speed;
    }

    Vec2 position, direction; // direction is a normalized vector
    float speed, radius;
};
You will need to invert the speed by negating it:
if (collision)
    speed = speed * -1;
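Negating the speed only sends the ball straight back along its incoming path. If you want it to deflect off the fixed ball instead, a common alternative (not part of the answer above, just a sketch built on the question's Vec2/Ball structs) is to reflect the direction about the collision normal:

#include <cmath>

// Sketch: deflect a moving ball off a fixed one by reflecting its direction
// about the collision normal. The ball's speed is left unchanged, as the
// question requires.
void reflectOffFixedBall(Ball& ball, const Ball& fixedBall)
{
    // Collision normal: from the fixed ball's centre towards the moving ball.
    float nx = ball.position.x - fixedBall.position.x;
    float ny = ball.position.y - fixedBall.position.y;
    float len = std::sqrt(nx * nx + ny * ny);
    if (len == 0.0f) return;   // centres coincide; nothing sensible to do
    nx /= len;
    ny /= len;

    // Reflect the direction vector: d' = d - 2 (d . n) n
    float dot = ball.direction.x * nx + ball.direction.y * ny;
    ball.direction.x -= 2.0f * dot * nx;
    ball.direction.y -= 2.0f * dot * ny;
}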

SDL vector subscript out of range [closed]

I'm getting a 'vector subscript out of range' error. I know this is caused by an indexing issue where the index is larger than the maximum valid index of the array/collection. However, I can't figure out why it gets to that stage, as I only ever increment the value by one, in a single place in the entire project, and if it becomes larger than the size of the array, I reset it to 0. This is in regard to the frames of an animation in SDL. The index variable in question is m_currentFrame.
Here is the 'Process' method for the animated sprite; this is the only place in the entire project that calls 'm_currentFrame++' (I did a Ctrl+F search for it):
void
AnimatedSprite::Process(float deltaTime) {
    // If not paused...
    if (!m_paused) {
        // Count the time elapsed.
        m_timeElapsed += deltaTime;
        // If the time elapsed is greater than the frame speed.
        if (m_timeElapsed > (float) m_frameSpeed) {
            // Move to the next frame.
            m_currentFrame++;
            // Reset the time elapsed counter.
            m_timeElapsed = 0.0f;
            // If the current frame is greater than the number
            // of frames in this animation...
            if (m_currentFrame > frameCoordinates.size()) {
                // Reset to the first frame.
                m_currentFrame = 0;
                // Stop the animation if it is not looping...
                if (!m_loop) {
                    m_paused = true;
                }
            }
        }
    }
}
Here is the method (AnimatedSprite::Draw()) that is throwing the error:
void
AnimatedSprite::Draw(BackBuffer& backbuffer) {
    // frame width
    int frameWidth = m_frameWidth;
    backbuffer.DrawAnimatedSprite(*this, frameCoordinates[m_currentFrame], m_frameWidth, m_frameHeight, this->GetTexture());
}
if (m_currentFrame > frameCoordinates.size()) {
    // Reset to the first frame.
    m_currentFrame = 0;
You already need to reset when m_currentFrame == frameCoordinates.size(), because the highest valid index of the vector is its size minus one (counting begins at 0).
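In other words, the wrap-around check should trigger as soon as the index reaches size(), e.g.:

if (m_currentFrame >= frameCoordinates.size()) {
    // Reset to the first frame.
    m_currentFrame = 0;
    // Stop the animation if it is not looping.
    if (!m_loop) {
        m_paused = true;
    }
}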

What is the "easiest" way to find the number of dark pixels of a jpeg? [closed]

I'm a first year engineering student and I'm working on an end-of-term project. Due to tight deadlines, I would like to avoid rummaging through image processing libraries. We (my group mates and I) need to find the most easily implementable method to get an integer count of the dark pixels in an image. I have read many other posts on image processing, but they are much more complicated than we need. Is there an easy way to do this? It is important that it is easy, because this is only a small part of our project and we can't commit too much time to it.
As for languages, I would prefer to use C++.
On a side note, any exceptional help given would be cited in our report (just mention the name you want to be cited as and you'll go down in history). It would also give us time to sleep. Sleep is to engineering students what cake is to fat kids.
Here it is done in Qt (not an image processing library, but an application framework):
#include <QImage>
#include <QColor>
#include <QString>

uint countDarkPixels(QString filename, quint8 threshold) {
    QImage img(filename);
    uint darkPixels = 0;
    for (int x = 0; x < img.width(); ++x) {
        for (int y = 0; y < img.height(); ++y) {
            QColor color(img.pixel(x, y));
            // A pixel counts as dark when its HSL lightness is below the threshold.
            if (color.toHsl().lightness() < threshold) darkPixels++;
        }
    }
    return darkPixels;
}
It works for other formats besides JPG too. It uses a conversion to HSL, which may not be very fast, but you said "easy", not "fast".
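A minimal usage sketch (the file name and the threshold of 50 are arbitrary example values):

#include <QDebug>

int main() {
    // Pixels whose HSL lightness is below 50 (out of 255) count as dark.
    uint dark = countDarkPixels("photo.jpg", 50);
    qDebug() << "Dark pixels:" << dark;
    return 0;
}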
There are two stages to this:
Load an image from a file.
Determine how many pixels in that image are "dark".
The first stage isn't too difficult - you could either use a pre-existing library, such as DevIL or FreeImage, or write your own - this and this should be enough to get you started.
Once you've loaded the image into your program somehow, you'll need to loop over the pixel data and count the number of "dark" pixels. Let's say you have an image structure that looks like this:
typedef struct
{
    int w;
    int h;
    unsigned char *data;
} image_s;
For simplicity, let's make the following assumptions:
The image is stored in 24-bit, RGB format, so that each pixel is represented as three unsigned bytes like this: RGBRGBRGB.
A "dark" pixel is one where (R+G+B)/3 < 10
Given the above, you would simply need to loop through each pixel within the image structure like so:
int count_dark_pixels(image_s *img)
{
    int dark_pixels, i;
    for (dark_pixels = 0, i = 0; i < img->w * img->h; ++i)
    {
        int r = img->data[(i*3)+0];
        int g = img->data[(i*3)+1];
        int b = img->data[(i*3)+2];
        if ((r+g+b)/3 < 10) { ++dark_pixels; }
    }
    return dark_pixels;
}
Uncompress the JPEG, get the Y channel pixel data (these values are the luminosity of each pixel), and count the dark pixels in that. I don't think you need the U and V channels; they are only used to reconstruct the colour information.
Working in RGB may be a pain, but it all depends on what you mean by a 'dark' pixel.
JPEG images are usually encoded in the YCbCr color space. Rather than red, green, and blue, the three components are luma (brightness), blue-difference, and red-difference. The Y component is effectively a black-and-white version of the colour image.
You can then determine the darkness of any point by examining the value of the Y component at that pixel. Set some threshold to decide what counts as a dark pixel.
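If you'd rather not pull in Qt, here is a sketch of the same idea with libjpeg (an assumption, since no decoder is named in this thread): decoding straight to grayscale hands you the Y (luma) values, which you can compare against a darkness threshold. Error handling is kept minimal.

#include <cstdio>
#include <vector>
#include <jpeglib.h>

// Count pixels whose luma is below the threshold by decoding the JPEG
// directly to grayscale. Returns -1 if the file can't be opened.
long countDarkLumaPixels(const char *filename, int threshold)
{
    FILE *f = std::fopen(filename, "rb");
    if (!f) return -1;

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.out_color_space = JCS_GRAYSCALE;   // ask libjpeg for the Y channel only
    jpeg_start_decompress(&cinfo);

    std::vector<JSAMPLE> row(cinfo.output_width);
    long dark = 0;
    while (cinfo.output_scanline < cinfo.output_height) {
        JSAMPROW rows[1] = { row.data() };
        jpeg_read_scanlines(&cinfo, rows, 1);
        for (unsigned x = 0; x < cinfo.output_width; ++x)
            if (row[x] < threshold) ++dark;   // luma below threshold counts as dark
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(f);
    return dark;
}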