Physics: pressure and height

If the pressure of the air inside a balloon at the water surface is H, to what depth must the balloon dive below the surface for the volume of the air to decrease to a third of its original value?

After some searching for formulas I came up with
d = 2H / 9806.65
H must be expressed in SI units (pascals, i.e. newtons per square meter) for this calculation to work; the depth d then comes out in meters.
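
For completeness, a short derivation, assuming the air compresses isothermally (Boyle's law) and fresh water with density $\rho = 1000\ \mathrm{kg/m^3}$ and $g = 9.80665\ \mathrm{m/s^2}$:

$$H V = (H + \rho g d)\,\frac{V}{3} \;\Rightarrow\; 3H = H + \rho g d \;\Rightarrow\; d = \frac{2H}{\rho g} = \frac{2H}{9806.65}\ \mathrm{m}$$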

AprilTag Localization Expected Accuracy

I am using the University of Michigan AprilTag library for localizing objects and am seeking advice on meeting my localization accuracy goals. I am using a 0.4-megapixel camera on tags that are roughly 7.5 cm wide, viewed from distances of 0.1-1.5 meters. I have used MATLAB to calibrate my camera intrinsics and distortion coefficients.
Desired Outcome
I would like to be able to localize tags to within 5 mm accuracy.
Observed Outcome
As I move the camera relative to the tag, the localization results vary. For every 100 cm I move away from the tag, the projected location of the tag in the world drifts by about 10 cm.
What is a reasonable expectation for the accuracy of my localization? What actions can I take to reduce the drift I am observing?
If the drift mainly appears in the Z component of the TVEC and the error increases more or less linearly, it is a sure sign that the focal length (fx and fy in the camera matrix) of your calibration is off.
Try the following:
check your calibration board: is the size of the grid correct? Make sure that your printer does not scale the original file.
make sure that the calibration board is fixed to a sturdy, flat surface.
calibrate again and check whether the values of fx and fy have changed (entries (0,0) and (1,1) of the camera matrix).
use at least 50 pictures, vary the board's angle, and remove all pictures showing motion blur before calibrating.
also check your detection parameters: you can try activating para.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_APRILTAG to improve corner accuracy (if you are using C++, adjust the call accordingly).
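
For reference, a minimal Python sketch of a detection setup with that refinement enabled (the tag dictionary and image file are assumptions, and the exact aruco API names vary between OpenCV versions):

    import cv2

    # Assumed tag family; substitute the dictionary you actually use.
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
    para = cv2.aruco.DetectorParameters_create()
    # AprilTag-style corner refinement, as suggested above.
    para.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_APRILTAG

    img = cv2.imread("frame.png")  # assumed input image
    corners, ids, rejected = cv2.aruco.detectMarkers(img, dictionary, parameters=para)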
(Too long for a comment, so I have to post it as another answer.) This will depend on the pixel size of your sensor and the focal length of your lens (which will "scale" your actual pixel size to a "projected" pixel size). As the effective resolution changes with distance, a safe estimate would be to use the effective pixel value at 1.5 m.
In terms of pixels, I would not trust marker corner accuracies below 0.3 px, as there seems to be an issue with subpixel accuracy when rotating the marker (see my open question: Understanding openCV aruco marker detection/pose estimation in detail: subpixel accuracy). Tilting the marker will also degrade the accuracy, as the precision of the determined rotation (the rvec of the pose) is usually only within a few degrees. If small angles occur (say, a tilt of only 2°), the pose might not reflect that, so the marker will appear smaller and the distance will be over-estimated.
In a flat setup (provided you are not using a wide-angle lens) you might be able to get the 5 mm accuracy with a sensor > 5 MPx. But taking the tilt and rotation of the marker into account, I am not sure it will suffice...
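
As a back-of-the-envelope check of what 0.3 px means in metric terms at the far end of the stated range (the focal length below is an assumed value for a 0.4 MPx sensor, not a measurement):

    # Metric size of one pixel at distance Z is roughly Z / fx,
    # where fx is the focal length in pixels. Values are assumptions.
    fx = 600.0             # assumed focal length in pixels
    Z = 1.5                # worst-case distance in meters
    corner_noise_px = 0.3  # corner accuracy floor mentioned above

    m_per_px = Z / fx
    print(f"1 px at {Z} m covers about {m_per_px * 1000:.2f} mm")
    print(f"0.3 px of corner noise is about {corner_noise_px * m_per_px * 1000:.2f} mm laterally")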

OpenGL single circular wave

I am trying to implement a circular wave in OpenGL using the Gerstner wave function:
It works fine, but all of my ground is deformed with multiple waves, and all I want is a single wave propagating from the center outwards. I tried to modify the z position for the points where sqrt(x*x + y*y) lies between two circles, to limit the boundaries of the wave: the first circle has radius wavelength * time and the second has radius wavelength * (time + 1). But this still isn't good enough, because the circles expand more slowly than the wave.
When you calculate the boundaries as wavelength * time, you're assuming the wavefront moves at a speed numerically equal to the wavelength: after 1 unit of time, the wavefront will have traveled wavelength units in your calculation.
Try using the wave speed that you used to calculate $\varphi$ instead of the wavelength there.
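
A minimal CPU-side sketch of that idea (names and constants are assumptions): the wave is applied only inside a one-wavelength annulus whose radius grows at the phase speed c = wavelength / period, not at wavelength per unit time.

    import math

    A = 0.5                   # amplitude (assumed)
    wavelength = 2.0
    period = 1.0
    c = wavelength / period   # phase speed: how fast the front really moves
    k = 2 * math.pi / wavelength
    omega = 2 * math.pi / period

    def height(x, y, t):
        r = math.hypot(x, y)
        front = c * t                          # radius of the leading edge
        if front - wavelength <= r <= front:   # single ring, one wavelength wide
            return A * math.cos(k * r - omega * t)
        return 0.0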

Finding distance of rectangle with known aspect ratio in OpenCV

I'm working on an OpenCV program to find the distance from the camera to a rectangle with a known aspect ratio. Finding the distance to a rectangle as seen from a forward-facing view works just fine:
The actual distance is very close to the distance calculated by this:
$$d = c \cdot \frac{w_{\text{target}} \cdot p_{\text{image}}}{2 \cdot p_{\text{target}} \cdot \tan(\theta_{\text{fov}} / 2)}$$
Where $w_{\text{target}}$ is the actual width (in inches) of the target, $p_{\text{image}}$ is the pixel width of the overall image, $p_{\text{target}}$ is the largest width (in pixels) of the detected quadrilateral, and $\theta_{\text{fov}}$ is the FOV of our webcam. This is then multiplied by some constant $c$.
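
Plugging in some illustrative numbers (all values below are assumptions, not from the original setup):

    import math

    w_target = 20.0               # actual target width, inches (assumed)
    p_image = 640                 # image width, pixels (assumed)
    p_target = 150                # detected target width, pixels (assumed)
    theta_fov = math.radians(60)  # horizontal FOV (assumed)
    c = 1.0                       # empirical correction constant

    d = c * (w_target * p_image) / (2 * p_target * math.tan(theta_fov / 2))
    print(f"estimated distance: {d:.1f} in")
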
The issue occurs when the target rectangle is viewed from a perspective that isn't forward-facing:
The difference in actual distance between these two orientations is small, but the detected distance differs by almost 2 feet.
What I'd like to know is how to calculate the distance consistently, accounting for different perspective angles. I've experimented with getPerspectiveTransform, but that requires me to know the resulting scale of the target - I only know the aspect ratio.
Here's what you know:
The distance between the top left and top right corners in inches (w_target)
The distance between those corners in pixels on a 2D plane (p_target)
So the trouble is that you're not accounting for the foreshortening of p_target when the rectangle is at an angle. For example, when the rectangle is turned 45 degrees, p_target shrinks by a factor of cos 45° ≈ 0.71, but your formula assumes the full width is visible, so you overestimate the distance.
To account for this, you should estimate the angle the box is turned at. I'm not sure of an easy way to extract that information from getPerspectiveTransform, but it may be possible. You could set up a constrained optimization where the decision variables are the distance and the skew angle, and enforce a metric distance between the left and right points on the box.
Finally, no matter what you are doing, you should make sure your camera is calibrated. Depending on your application, you might be able to use AprilTags to simply solve your problem.
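
To see how quickly the foreshortening biases the estimate, here is a small illustration (the frontal pixel width is an assumed value):

    import math

    p_target_frontal = 150   # pixel width of the target seen head-on (assumed)
    for angle_deg in (0, 15, 30, 45):
        seen = p_target_frontal * math.cos(math.radians(angle_deg))
        bias = p_target_frontal / seen   # distance estimate scales as 1 / p_target
        print(f"{angle_deg:2d} deg tilt -> {seen:5.1f} px, distance over-estimated {bias:.2f}x")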

How do I remove air resistance?

I have a space simulation, so obviously I don't want gravity or air resistance. Gravity was straightforward to turn off, but I can't find the equivalent for air resistance. I presume it's going to be set on a body-by-body basis rather than being a world-wide setting like gravity.
Indeed, I see on btSoftBody that there are values for medium density such as air_density, but I am using btRigidBody.
There is no air resistance in Bullet Physics, but there is damping.
For every body you create, you should set the damping by calling
body->setDamping(linear, angular);
with linear set to 0 to remove the unwanted drag-like slowdown.

OpenGL modelling rocket flame and vapour trails with particles

Does anyone have any guidance for coding an approximation of the particle stream coming out of a jet engine (with afterburner) in OpenGL, drawing particles using vertex buffers / 4f color buffers?
I believe there are two aspects to this problem:
The colour of the light as particles exit the jet engine, as a function of temperature and some constants relating to the type of gas being burnt. This article leads me to believe I will need some sort of array for the temperature/colour conversion curve (see the lookup sketch below). Apparently hydrogen burns at 2,660 °C in oxygen and 2,045 °C in air, whereas jet fuel burns at 287.5 °C in air (yet the temperature of a jet fighter's afterburner can somehow reach 1,700 °C).
The vapour trail behind the rocket/jet, which will be white with alpha for the water-based vapour trail if the rocket is in the atmosphere. I also believe my assumption is correct that this would not be necessary for a rocket burning fuel in space. The vapour trail will be simulated as tiny water droplets, which are much larger than the wavelength of visible light, so they scatter light achromatically; as water itself is colourless, the resulting colour would be white.
Also, I am looking to model this from a bird's-eye perspective, so it does not need to be a full 3D model. The positions of the 10 or so pilot lights around the afterburner cone, for example, could just be approximated as maybe 5 linear points.
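
Regarding point 1, a simple way to hold that curve is a small table with linear interpolation; the sample temperatures and RGB values below are illustrative placeholders, not physically calibrated black-body data:

    # (temperature_C, (r, g, b)) control points, coolest first; values assumed.
    CURVE = [
        (300.0,  (0.4, 0.1, 0.0)),   # dull red
        (1000.0, (1.0, 0.4, 0.1)),   # orange
        (1700.0, (1.0, 0.8, 0.4)),   # yellow-white (afterburner-ish)
        (2600.0, (0.9, 0.9, 1.0)),   # blue-white
    ]

    def colour_for_temperature(t):
        if t <= CURVE[0][0]:
            return CURVE[0][1]
        for (t0, c0), (t1, c1) in zip(CURVE, CURVE[1:]):
            if t <= t1:
                f = (t - t0) / (t1 - t0)   # linear blend between control points
                return tuple(a + f * (b - a) for a, b in zip(c0, c1))
        return CURVE[-1][1]
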
Depending on the level of detail you require, you may want to simply use a textured cone coming out of the jet engine. If you want to go for a full-blown particle system (which for a jet engine does not appear necessary to me), then you might want to give each particle on the stack a bunch of properties like speed (vec3), size, gas type and age.
Make a loop that processes each particle every time your game loop goes around. On each tick, your simulation would then change the speed and size as the particle gets older. You should make a function that determines the look of the particle according to its age and gas type.
At its simplest, this could make colored particles fade, enlarge and slow down as they get older. Is this what you are looking for?
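
A minimal sketch of that per-tick update (the lifetime, drag and growth constants are assumptions):

    import random

    class Particle:
        def __init__(self):
            # Assumed exit velocity along +z, with a little random spread.
            self.vel = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5), 10.0]
            self.size = 0.1
            self.age = 0.0
            self.gas = "jet_a"   # gas-type tag (assumed)

    LIFETIME = 2.0   # seconds (assumed)

    def update(particles, dt):
        for p in particles:
            p.age += dt
            p.vel = [v * 0.98 for v in p.vel]   # slow down as it ages
            p.size *= 1.02                      # enlarge as it expands
        particles[:] = [p for p in particles if p.age < LIFETIME]

    def rgba(p):
        # Look as a function of age: fade from yellow-white toward dim red,
        # with alpha going to zero (this feeds the 4f color buffer).
        t = min(p.age / LIFETIME, 1.0)
        return (1.0, 1.0 - 0.5 * t, 0.8 - 0.8 * t, 1.0 - t)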