Modelling rocket flame and vapour trails with particles in OpenGL

Does anyone have any guidance for coding an approximation of the particle stream coming out of a jet engine (with afterburner) in OpenGL, using particles drawn with vertex buffers / 4f colour buffers?
I believe there are two aspects to this problem:
The colour of the light as particles exit the jet engine, as a function of temperature and some constants relating to the type of gas being burnt. This article leads me to believe I will need some sort of lookup array for the temperature-to-colour conversion curve (sketched in code at the end of this question). Apparently hydrogen burns at 2,660°C in oxygen and 2,045°C in air, whereas jet fuel burns at 287.5°C in air (yet the temperature of a jet fighter's afterburner can somehow reach 1,700°C).
The vapour trail behind the rocket/jet, which will be white with alpha for the water-based vapour trail if the rocket is in the atmosphere. I also believe my assumption is correct that this would not be necessary for a rocket burning fuel in space. The vapour trail will be simulated as tiny water droplets, which are much larger than the wavelength of visible light, so they scatter light achromatically; as water itself is colourless, the resulting colour would be white?
Also, I am looking to model this from a bird's-eye perspective, so it does not need to be a full 3D model. The positions of the 10 or so pilot lights around the afterburner cone, for example, could just be approximated as maybe 5 linear points.
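To illustrate the first point, here is a minimal sketch of the kind of temperature-to-colour lookup I have in mind. The RGB control points are illustrative guesses along a rough black-body-style curve, not physical constants:

```cpp
#include <vector>

struct Rgb { float r, g, b; };

// Hypothetical control points for a rough temperature-to-colour curve
// (deep red -> orange -> yellow -> blue-white); these would need tuning
// against a real black-body chart for the gas type in question.
struct CurvePoint { float tempC; Rgb colour; };
static const std::vector<CurvePoint> kCurve = {
    {  500.0f, {0.4f, 0.0f,  0.0f} },
    { 1000.0f, {1.0f, 0.2f,  0.0f} },
    { 1700.0f, {1.0f, 0.6f,  0.1f} },  // afterburner range
    { 2045.0f, {1.0f, 0.9f,  0.5f} },  // hydrogen in air
    { 2660.0f, {0.9f, 0.95f, 1.0f} },  // hydrogen in oxygen
};

// Linear interpolation along the curve; clamps outside the table.
Rgb colourForTemperature(float tempC) {
    if (tempC <= kCurve.front().tempC) return kCurve.front().colour;
    if (tempC >= kCurve.back().tempC)  return kCurve.back().colour;
    for (size_t i = 1; i < kCurve.size(); ++i) {
        if (tempC <= kCurve[i].tempC) {
            const CurvePoint& a = kCurve[i - 1];
            const CurvePoint& b = kCurve[i];
            float t = (tempC - a.tempC) / (b.tempC - a.tempC);
            return { a.colour.r + t * (b.colour.r - a.colour.r),
                     a.colour.g + t * (b.colour.g - a.colour.g),
                     a.colour.b + t * (b.colour.b - a.colour.b) };
        }
    }
    return kCurve.back().colour;
}
```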

Depending on the level of detail you require, you may want to simply use a textured cone coming out of the jet engine. If you want to go for a full-blown particle system (which for a jet engine does not appear necessary to me), then you might want to give each particle on the stack a bunch of properties, like speed (vec3), size, gas type and age.
Make a loop that processes each particle every time your game loop goes around. On each tick, your simulation would then change the speed and size as the particle gets older. You should make a function that determines the look of the particle according to its age and gas type.
At its simplest, this could make coloured particles fade, enlarge and slow down as they get older. Is this what you are looking for? A minimal sketch of the idea follows.
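Here is a minimal sketch of that per-tick update, assuming a simple particle pool; the field names and the drag, growth and lifetime constants are all illustrative:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

enum class GasType { JetFuel, Hydrogen };

struct Particle {
    Vec3    position;
    Vec3    velocity;  // the "speed (vec3)" property
    float   size;
    float   age;       // seconds since emission
    GasType gas;
};

// Called once per game-loop tick; dt is the frame time in seconds.
void updateParticles(std::vector<Particle>& particles, float dt) {
    const float drag   = 0.9f;  // illustrative: particles slow down
    const float growth = 1.5f;  // illustrative: particles enlarge
    const float maxAge = 2.0f;  // illustrative lifetime in seconds

    for (Particle& p : particles) {
        p.age += dt;
        float damp = 1.0f - drag * dt;   // slow down with age
        p.velocity.x *= damp;
        p.velocity.y *= damp;
        p.velocity.z *= damp;
        p.position.x += p.velocity.x * dt;
        p.position.y += p.velocity.y * dt;
        p.position.z += p.velocity.z * dt;
        p.size += growth * dt;           // enlarge with age
    }

    // Retire particles past their lifetime; colour/alpha would likewise
    // be derived from age and gas type when filling the colour buffer.
    particles.erase(
        std::remove_if(particles.begin(), particles.end(),
                       [maxAge](const Particle& p) { return p.age > maxAge; }),
        particles.end());
}
```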

Related

Calculating screen position from latitude and longitude

I have a video camera on a tall building (e.g. 10 storeys high) looking at an area. The video stream is displayed on my laptop.
If I know a building's latitude/longitude, how can I calculate the position (in pixels) at which to mark a cross on my video display to indicate that building?
What you are looking for is Augmented Reality.
You would need to know the focal length of the camera, which is a non-trivial task. I wrote such a calibration many years ago for a professional Sony DVCAM system, using error diffusion pulled from the Numerical Recipes book. It took a long time to set up and calibrate, but once calibrated the focal length was fixed.
Modern systems use a half-cube approach, in which the software knows the size of the cube and can thus infer the focal length, FOV and other aspects of the camera. Again not a trivial operation, but a lot quicker than my method. I read a paper by a Canadian researcher who had a virtual tank driving over objects in photos of Greece, IIRC. This was in 2001, so forgive me if the name of the paper has slipped my mind.
Another aspect you have to consider is the angle and orientation of the camera. While you know the camera's GPS position, you also need to know its orientation. You can get a cheap electronic compass that will give you a best-guess direction, but what you really need is an IMU. These range from a few quid for Arduino-grade units to many thousands for industrial ones. Since the pull and rotation of the Earth are constants, the IMU can infer its orientation from the strain placed on its axes.
The ideas that were difficult then are probably commonplace now. Your best bet would be to read through some papers on Augmented Reality.
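Once you do have a focal length and an orientation, the projection itself is a standard pinhole-camera calculation. A minimal sketch, assuming a row-major rotation matrix from your IMU and a focal length expressed in pixels (all names here are illustrative, not from any particular library):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Equirectangular approximation of GPS -> local metres relative to the
// camera; accurate enough over a few hundred metres.
Vec3 gpsToLocalMetres(double lat, double lon, double alt,
                      double camLat, double camLon, double camAlt) {
    const double kEarthRadius = 6371000.0;
    const double kDegToRad = M_PI / 180.0;
    double east  = (lon - camLon) * kDegToRad * kEarthRadius *
                   std::cos(camLat * kDegToRad);
    double north = (lat - camLat) * kDegToRad * kEarthRadius;
    double up    = alt - camAlt;
    return { east, north, up };
}

// R is the camera's 3x3 rotation matrix (row-major), mapping world axes
// into camera axes (x right, y down, z forward). Returns false if the
// target is behind the camera.
bool projectToPixel(const Vec3& world, const double R[9],
                    double focalPx, double centreX, double centreY,
                    double& u, double& v) {
    Vec3 c = {
        R[0] * world.x + R[1] * world.y + R[2] * world.z,
        R[3] * world.x + R[4] * world.y + R[5] * world.z,
        R[6] * world.x + R[7] * world.y + R[8] * world.z,
    };
    if (c.z <= 0.0) return false;
    u = centreX + focalPx * c.x / c.z;  // pixel position for the cross
    v = centreY + focalPx * c.y / c.z;
    return true;
}
```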

3D dice roll in Lwjgl and OpenGL

I want to have a 3D dice that can be dropped onto a surface and land face down. I also want to give the cube a random rotation velocity, so that it rotates in mid-air before landing and I get a random result.
I've looked around but I can't find anything on the subject.
I know how to render and spawn the dice, have it affected by gravity, give it a random rotation velocity, and stop it once it hits a surface. But how can I make sure it lands face down, and how can I then tell which face is facing upward so I can get the value the dice landed on?
What you want is rigid body physics simulation. I would recommend using a physics simulation library, such as Bullet.
Physics libraries usually provide functions to tell whether an object is "sleeping" (not moving). This can be used to trigger the dice-value readout.
To know which face is facing upward, take the transformation matrix M of your cube, apply it to each face's unit normal vector, and see which transformed normal points most nearly straight up, as in the sketch below.
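A minimal sketch of that readout, assuming a row-major 3x3 rotation matrix and the conventional face layout where opposite faces sum to 7 (adjust to match your model):

```cpp
struct Vec3 { float x, y, z; };

// Local-space face normals paired with the value shown on that face.
struct Face { Vec3 localNormal; int value; };
static const Face kFaces[6] = {
    { { 0.0f,  1.0f,  0.0f}, 1 }, { { 0.0f, -1.0f,  0.0f}, 6 },
    { { 1.0f,  0.0f,  0.0f}, 2 }, { {-1.0f,  0.0f,  0.0f}, 5 },
    { { 0.0f,  0.0f,  1.0f}, 3 }, { { 0.0f,  0.0f, -1.0f}, 4 },
};

// M is the cube's 3x3 rotation matrix, row-major. The world-space y
// component of M * normal is its dot product with world up (0, 1, 0),
// so the face with the largest value is the one facing upward.
int upwardFaceValue(const float M[9]) {
    int   bestValue = 0;
    float bestDotUp = -2.0f;
    for (const Face& f : kFaces) {
        float worldY = M[3] * f.localNormal.x +
                       M[4] * f.localNormal.y +
                       M[5] * f.localNormal.z;
        if (worldY > bestDotUp) { bestDotUp = worldY; bestValue = f.value; }
    }
    return bestValue;
}
```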
If you prefer implementing the physics yourself, these papers are a really good introduction to the basics of rigid body simulation:
https://www.cs.cmu.edu/~baraff/sigcourse/notesd1.pdf
https://www.cs.cmu.edu/~baraff/sigcourse/notesd2.pdf

Detect person in bed

Suppose I want to find out if there is a person in a bed or not using cameras and computer vision algorithms. One can assume that the camera provides RGB, infrared and depth data.
I don't really have a good idea how to solve this. So far I have come up with the following:
Estimate a plane of the bed object using RANSAC. This plane should be further from the ground plane if there is a person in the bed. This seems very unstable, though: it assumes the normal height of a bed is known, and it can easily be broken if the bed has an adjustable head section (e.g. in a hospital).
Face detection: try to detect a face in the bed. This probably isn't very reliable either, since the face can be turned sideways to the camera or partly covered.
Use the infrared image. I am not sure how much you would see through the blanket, though, or what would happen if the person has just left the bed and it is still warm.
Is there a good way to do this? Or, to be reliable, would you have to use pressure sensors in the bed?
Thanks!
I don't know about infrared images, but for camera-based video processing this kind of problem is widely studied.
If your problem is to detect a person in a bed that is normally empty, then I think the simplest algorithm would be to capture successive frames and calculate their difference.
The presence of a human in the frame would make it differ from a frame capturing only the empty bed. Depending on which algorithm along these lines you choose, you will get different reliability; a minimal sketch follows.
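As a rough illustration of that difference test, here is a sketch using OpenCV; comparing against a stored reference frame of the empty bed is assumed, and both the binarisation threshold and the occupancy cutoff are illustrative values that need tuning:

```cpp
#include <opencv2/opencv.hpp>

// Returns true if the current frame differs enough from a reference
// frame of the empty bed to suggest someone (or something) is in it.
bool bedLooksOccupied(const cv::Mat& emptyBedRef, const cv::Mat& frame) {
    cv::Mat refGray, curGray, diff, mask;
    cv::cvtColor(emptyBedRef, refGray, cv::COLOR_BGR2GRAY);
    cv::cvtColor(frame, curGray, cv::COLOR_BGR2GRAY);
    cv::absdiff(refGray, curGray, diff);              // per-pixel difference
    cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
    int changedPixels = cv::countNonZero(mask);
    return changedPixels > 0.01 * mask.total();       // >1% of pixels changed
}
```

In practice you would restrict the test to a region of interest around the bed and periodically update the reference frame to cope with lighting changes.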
Otherwise you can go directly for human detection in video frames. One possible algorithm is described here.
Edit:
Your problem is harder than I thought. The following approach might cover these cases.
The main idea is to use a bunch of features at once to get higher accuracy and remove false positives.
Use a HOG person detector at the top level to detect a person's entry into the scene. If the positions of the possible entry doors are known, or detectable using edge lines in the scene, use them to increase accuracy (at the point of entry, the difference between successive frames will be located near the doors).
Use edge lines to track the human, and use the bed's edges to track the human's position; the edges of the human should be bounded by the edges of the bed.
If the frame difference is located within the bed, this implies the human is in the bed but moving.
If needed, include texture analysis and connected-component analysis as a preprocessing step to remove other moving objects in the room (for example, clothes moving in a draught) for higher accuracy.
Also use face detectors to increase accuracy.
The infrared that the camera uses has a different frequency from the infrared given off by a warm object. Unless you are using a military-grade IR scanner, you can forget about the IR-warmth connection. But IR is still useful if there is limited light, or if you use it for depth maps.
Go with depth (Kinect-style) and estimate the bed as a segment of your image. It should have some distinguishing features in depth (certain dimensions, flatness, etc.). The bed is usually surrounded by walls or floor that are easy to segment out. Your algorithm can also be tuned to the distance to the bed, cutting the bed out based purely on depth range, as sketched below.
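A minimal sketch of that depth-range cut, assuming a Kinect-style 16-bit depth map in millimetres; the calibrated camera-to-mattress distance and the margin are illustrative:

```cpp
#include <opencv2/opencv.hpp>

// Keep only pixels whose depth is meaningfully closer to the camera
// than the (empty) mattress surface; depthMm is a CV_16U depth map.
cv::Mat somethingOnBedMask(const cv::Mat& depthMm) {
    const int camToMattressMm = 2500;  // measured once, with the bed empty
    const int marginMm        = 100;   // ignore bedding wrinkles
    cv::Mat mask;
    cv::inRange(depthMm,
                cv::Scalar(400),                         // sensor near limit
                cv::Scalar(camToMattressMm - marginMm),  // above the mattress
                mask);
    return mask;  // 255 wherever something rises above the mattress plane
}
```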
As other people have said, it would be useful to learn more about your particular goal or application. What is the background or environment around the bed? How does it look when there is no person in it? Can a person simulate his/her presence (as in a prison-escape scenario)? Etc.

Collision Detection between quads OpenGL

I am making a simple 3D OpenGL game. At the moment I have four bounding walls, a random distribution of internal walls and a simple quad cube for my player.
I want to set up collision detection between the player and all of the walls. This is easy with the bounding walls, as I can just check whether the x or z coordinate is less than or greater than a value. The problem is the interior walls. I have a display list (via glGenLists) which holds small rectangular wall segments; at initial setup I randomly generate an array of x,z coordinates and translate these wall segments to those positions in the draw scene. I have also added a degree of rotation, either 45 or 90 degrees, which complicates the collision detection.
Could anyone assist me with how I might go about detecting collisions here? I have the coordinates of each wall section, the size of each wall section and also the coordinates of the player.
Would I be looking at a bounding box around the player and walls, or is there a better alternative?
I think your question is largely about detecting collision with a wall at an angle, which is essentially the same as detecting whether a point lies on a line, for which there is an answer here:
How can I tell if a point belongs to a certain line?
(The code may be C#, but the maths behind it applies in any language.) You just have to replace the Y in those formulas with Z, since Y appears not to be a factor in your current design. A distance-based variant is sketched below.
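In practice you want the distance from the player to each wall treated as a finite segment, not an infinite line, so the 45/90-degree rotation is simply baked into the segment endpoints. A minimal sketch, assuming the player is approximated by a circle of some radius in the x,z plane:

```cpp
#include <algorithm>
#include <cmath>

// Distance in the ground plane from the player (px, pz) to a wall
// segment (ax, az)-(bx, bz); Y is ignored, as noted above.
float pointToSegmentDistance(float px, float pz,
                             float ax, float az,
                             float bx, float bz) {
    float dx = bx - ax, dz = bz - az;
    float lenSq = dx * dx + dz * dz;
    // Project the player onto the segment, clamping to the endpoints.
    float t = lenSq > 0.0f
            ? std::max(0.0f, std::min(1.0f,
                  ((px - ax) * dx + (pz - az) * dz) / lenSq))
            : 0.0f;
    float cx = ax + t * dx, cz = az + t * dz;  // closest point on the wall
    return std::sqrt((px - cx) * (px - cx) + (pz - cz) * (pz - cz));
}

// Collision when the wall comes within the player's radius. Rotated
// walls need no special casing: the endpoints already encode the angle.
bool hitsWall(float px, float pz, float playerRadius,
              float ax, float az, float bx, float bz) {
    return pointToSegmentDistance(px, pz, ax, az, bx, bz) < playerRadius;
}
```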
There have been many articles and even books written on how to do "good" collision detection. Part of this, of course, comes down to whether you want very accurate or very fast code: for "perfect" simulations, you may sacrifice speed for accuracy. In most games, if the player's body "dents" the wall just a little because the player has gone slightly past the wall intersection, that's perhaps acceptable.
It is also useful to partition the space. The common way to do this is binary space partitioning, which is nicely described and illustrated here:
http://en.wikipedia.org/wiki/Binary_space_partition
Books on game programming should cover the basic principles of collision detection. There are also plenty of articles on the web about it, including an entry in Wikipedia: http://en.wikipedia.org/wiki/Collision_detection
Short of making a rigid body physics engine, you could use point-to-plane distance to see whether any of the cube's corner points are within a small radius of the plane (I would test against a small epsilon rather than exactly 0.0f, so the points have a little radius to them). You will need to store a normalised 3D vector (a vector of length 1.0f) to represent the normal of the plane. If the dot product between the plane normal and the vector from the centre of the plane to the point is less than that radius, you have a collision. After that, you can take the speed of the cube (the length of its velocity vector), multiply it by 0.7f for some energy absorption, and store this as the cube's new speed. Then reflect the cube's normalised velocity vector over the normal of the plane and multiply that by the newly calculated speed. A sketch follows.
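A minimal sketch of that test and response; Vec3 here is a stand-in for whatever vector type your maths library provides, and the 0.7f restitution factor is the one suggested above:

```cpp
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Signed distance from a cube corner to the plane; positive on the side
// the (unit-length) normal points toward.
float pointPlaneDistance(const Vec3& corner, const Vec3& planeCentre,
                         const Vec3& planeNormal) {
    return dot(corner - planeCentre, planeNormal);
}

// On contact: reflect the velocity across the plane normal and damp it
// for energy absorption. v' = (v - 2(v.n)n) * restitution.
Vec3 bounce(const Vec3& velocity, const Vec3& planeNormal) {
    const float restitution = 0.7f;
    Vec3 reflected =
        velocity - planeNormal * (2.0f * dot(velocity, planeNormal));
    return reflected * restitution;
}
```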
If you really want to get into game physics, grab a clone of this guy's GitHub repo. I've used his book for a physics-for-games class. There are some mistakes in the book, so be sure to get all the code samples from GitHub. He goes through making a mass-aggregate physics engine and then a rigid body one. I would also brush up on matrices and tensors.

What is the correct field of view angle for human eye?

At the moment I use a 45-degree angle for my gluPerspective() call. Is this the correct angle to make the rendering look realistic, the way humans perceive the world? There is also an issue with the window's aspect ratio: for example, a 2:1 window will make a 45-degree angle look more like an 80-degree angle on a screen with a 4:3 ratio, etc. So the window size changes the perspective as well.
So what are the correct window aspect ratio and field-of-view angle for making a game look closest to how humans perceive the world?
Unless you're using some kind of wrap-around setup, a single monitor isn't going to fill up the entire field of view of the human eye, which is usually nearly 180 degrees horizontally (of course it varies from person to person). If you tried to render something that wide, it would look weird -- the scene would appear to stretch out excessively toward the edges. Set the FOV to 120 degrees or so and you'll see what I'm talking about.
So instead of considering the human eye's FOV, you usually just draw imaginary lines from the user's head to the edges of the monitor, and take the angles between those. Of course this varies from desk to desk and monitor to monitor, so it's something of an artistic decision; 70 degrees vertical is a decent place to start. Assuming the game is running full screen, you're basically at the mercy of the monitor itself for the aspect ratio.
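That head-to-monitor-edge rule reduces to a one-line formula. A minimal sketch (the example measurements are illustrative):

```cpp
#include <cmath>

// Vertical FOV implied by the physical setup:
// fovY = 2 * atan((screen height / 2) / viewing distance).
// Any length unit works as long as both arguments use the same one.
double physicalFovYDegrees(double screenHeightCm, double viewingDistanceCm) {
    double fovRad = 2.0 * std::atan((screenHeightCm / 2.0) / viewingDistanceCm);
    return fovRad * 180.0 / M_PI;
}

// Example: a 30 cm tall monitor viewed from 60 cm gives roughly 28
// degrees, which is why games pick a wider, artistic FOV instead.
```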
In general, unless you have very specific needs, you should give the user the option to change the FOV as they see fit.
You also generally do not want to have your monitor's FOV conform to the human range of FOV. After all, the monitor only covers part of a human's visual range; even if they're not paying attention to anything else, they still see everything around it. Most people are fine with having their monitor be a portal-like view of a world.
It's around 87 degrees.
Most games seem to use an arbitrary centre point, so eye-to-monitor-edge is not quite correct either.
There should also be an adjustment for depth, so that the screen appears to be no more than a pane of glass.
You know when you find a setting that works: your brain takes it all in very easily and your aim becomes perfect, as your muscle memory can react instantly.
(Guess what: I'm not fine with a portal view. :)