I am having difficulty understanding the Box2D coordinate system versus pixels or points in cocos2d. I am also using a retina display.
I have tried PTM_RATIO values of 32 and 30.
But Box2D does not seem to map linearly to pixels. Could you suggest how to keep them in sync?
I need to design a game that must use exact pixel positions.
Thank you
PTM_RATIO stands for Pixel to Meter Ratio, so this number just scales what Box2D thinks is a meter to pixels. You don't want a 1:1 ratio, because then each pixel would be a meter high from the physics engine's standpoint, and that might make your game behave oddly.
I use a PTM_RATIO of 16 and that seems to work in a lot of cases, so give that a try.
Just make sure you convert from the internal Box2D coordinates to your screen coordinates using the PTM_RATIO multiplier when you draw/position sprites and graphics, and everything should come out fine and be as close to pixel-perfect as a physics engine can be.
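A minimal sketch of that conversion, assuming cocos2d-x style C++ (in the Objective-C cocos2d the idea is identical; the function name is just illustrative):

#define PTM_RATIO 32.0f

// Copy the Box2D body's position/rotation (metres, radians) onto the sprite (points, degrees).
void syncSpriteWithBody(CCSprite *sprite, b2Body *body)
{
    b2Vec2 pos = body->GetPosition();
    sprite->setPosition(ccp(pos.x * PTM_RATIO, pos.y * PTM_RATIO));
    // Box2D angles are counter-clockwise radians; cocos2d rotation is clockwise degrees.
    sprite->setRotation(-1.0f * CC_RADIANS_TO_DEGREES(body->GetAngle()));
}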
It doesn't have to be pixel-perfect collision, but I want it to be as close as possible to the actual pixels of the sprite. FYI, I created a 32 by 32 sprite but was only able to fill approximately half of the pixels, so the rest is just transparent.
Most games out there don't use anything close to pixel perfect collision and it's usually not needed. Having some approximated rectangle or a combination of multiple rectangles is usually enough.
SFML itself provides intersects() and contains() functions for its sf::Rect<T> class.
There is also a collision detection class in the SFML wiki which features bit-mask collision, which is basically pixel-perfect collision detection.
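For the approximated-rectangle approach, a minimal sketch using SFML's own bounding boxes (assuming two sf::Sprite objects and the SFML 2.x API):

#include <SFML/Graphics.hpp>

// Coarse collision test: do the sprites' axis-aligned bounding boxes overlap?
bool spritesCollide(const sf::Sprite &a, const sf::Sprite &b)
{
    return a.getGlobalBounds().intersects(b.getGlobalBounds());
}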
I'm doing camera calibration using the calibration.cpp sample provided in the OpenCV 3.4 release. I'm using a simple 9x6 chessboard, with square length = 3.45 mm.
Command to run the code:
Calib.exe -w=9 -h=6 -s=3.45 -o=camera.yml -oe imgList.xml
imgList.xml
I'm using a batch of 28 images available here
camera.yml (output)
Image outputs from drawChessboardCorners: here
There are 4 images without the chessboard overlay drawn; findChessboardCorners failed for these.
The results look kind of strange (if I understand them correctly). I'm taking the focal length value for granted, but the principal point seems way off at c = (834, 1513). I was expecting a point closer to the image center at (1280, 960), since the camera is oriented very close to 90 degrees to the surface being viewed.
Also, if I place an object at the principal point and move it along the Z axis, I shouldn't see it move along x and y in the image; is this correct?
I suspect I should add images with a greater tilt of the chessboard with respect to the camera (z-angle) to get better results. But the camera has a really narrow depth of field, which prevents the chessboard corners from being detected.
The main issue is that you are not feeding the calibration software enough information to correctly estimate the different parameters.
In all 28 images you changed only the orientation of the chessboard around the z axis, within the same plane. You don't need to take that many photos; for me, around 15 is enough. You need to add more degrees of freedom to your images: change the distance of the chessboard from the camera and tilt the chessboard around its X and Y axes. Recalibrate the camera and you should get the right parameters.
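Once you have the extra views, the recalibration itself is the same call the calibration.cpp sample makes; a minimal sketch, assuming objectPoints/imagePoints are collected the same way the sample does:

#include <opencv2/opencv.hpp>
#include <vector>

// Returns the RMS reprojection error; well under 1 pixel usually indicates a sane calibration.
double recalibrate(const std::vector<std::vector<cv::Point3f>> &objectPoints,
                   const std::vector<std::vector<cv::Point2f>> &imagePoints,
                   cv::Size imageSize,
                   cv::Mat &cameraMatrix, cv::Mat &distCoeffs)
{
    std::vector<cv::Mat> rvecs, tvecs;
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs);
}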
It really depends on the camera and lens you use.
More specifically on things like:
the precision of the sensor chip placement
the mounting of the lens screw thread
the manufacturing of the lens itself
Some cheap webcams with small chips can even have the principal point outside the image area (meaning it could also be a negative number). So in your case C could be either (834, 1513) or (1513, 834).
If you are using an industrial camera or something similar, C should be within tens of percent of the image centre, e.g. (1280, 960) ± 25%.
About the problem with narrow DOF (in a nutshell): to make it wider you need to make the aperture as small as possible, lengthen the exposure, and add some extra light behind the camera to compensate for the smaller aperture.
You could also refocus to get sharp shots from different distances, but your accuracy drops slightly because refocusing changes the focal length a little. In most cases you do not need that extra accuracy, so this should not be a problem.
First of all, sorry for my bad English,
I have an object like in the following picture; the object always spins around a horizontal axis. Can anybody recommend how I can take a photo that captures the full label of the tube while the tube is spinning? I can grab an image from my camera via OpenCV C++, but when the tube is spinning I cannot take a good photo (my image is blurry, not clear).
My tube is facing the camera directly. Its rotation speed is about 500 RPM.
Hope to get your help soon,
Thank you very much!
this is my object:
Some sample images:
Here is my image when I use an iPhone 5 camera with flash:
Motion blur
This can be improved by lowering the exposure time, but you need to increase the lighting to compensate. Most modern compact cameras cannot set the exposure time directly (so the companies can keep selling the expensive professional cameras), even though it would only take a few lines of GUI code; but if you increase the light, the automatic exposure should shorten on its own.
In industry this problem is solved by special TDI cameras like
HAMAMATSU TDI Line Scan Cameras
TDI means Time Delay Integration: the camera's CCD pixels pass their charge on to the next pixel, synchronized with the motion. The result is an effect as if you were moving the camera synchronously with the object's surface. The blur is still present, but much smaller (only a fraction of the real exposure time).
In computer vision and DIP you can de-blur the image with a deconvolution process if you know the movement properties (which you do). It is an inversion of the blur filter (e.g. a Gaussian or motion blur), done with the FFT and an optimization process to find the inverse filter.
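As a rough illustration only (not the exact process described above), a minimal Wiener-style deconvolution sketch with OpenCV; it assumes a single-channel CV_32F image, a simple linear horizontal motion PSF of known length, and a guessed SNR, and the restored image comes out circularly shifted by about half the PSF length:

#include <opencv2/opencv.hpp>

// Restores an image blurred by a known horizontal motion of `len` pixels.
cv::Mat wienerDeblur(const cv::Mat &blurred, int len, double snr = 100.0)
{
    // Point-spread function: a horizontal line of `len` pixels summing to 1.
    cv::Mat psf = cv::Mat::zeros(blurred.size(), CV_32F);
    for (int x = 0; x < len; ++x)
        psf.at<float>(0, x) = 1.0f / len;

    cv::Mat imgF, psfF;
    cv::dft(blurred, imgF, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(psf,     psfF, cv::DFT_COMPLEX_OUTPUT);

    // Wiener filter in the frequency domain: F = G * conj(H) / (|H|^2 + 1/SNR)
    cv::Mat planes[2], denom, restoredF;
    cv::split(psfF, planes);
    cv::magnitude(planes[0], planes[1], denom);        // |H|
    denom = denom.mul(denom) + 1.0 / snr;              // |H|^2 + 1/SNR
    cv::mulSpectrums(imgF, psfF, restoredF, 0, true);  // G * conj(H)
    cv::split(restoredF, planes);
    planes[0] /= denom;
    planes[1] /= denom;
    cv::merge(planes, 2, restoredF);

    cv::Mat restored;
    cv::idft(restoredF, restored, cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);
    return restored;
}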
Out of focus blur
This is due to the fact that your surface is curved while the camera chip is flat, so the outer pixels are at a different distance from the chip than the centre pixels. Without special optics you can handle this with line-scan cameras. Of course I do not expect you to have one, so you can use your ordinary camera for this too.
Just mount your camera so that one of the camera axes (for example the x axis) is parallel to your object's rotation axis (surface). Then capture images at a constant time step and use only the centre line/slice of each image (the height of the line/slice depends on your exposure time and the object's speed; the slices should overlap a bit). Then combine these lines/slices from all the sampled images to form the focused image.
[Edit1] home made TDI setup
So mount camera so its view axis is perpendicular to surface.
Take burst shots or video with constant frame-rate
The shorter exposure time (higher frame-rate) the more focused whole image will be (due to optical blur) and the bigger area dy from motion blur. And the higher the rotation RPM the smaller the dy will be. So find the best option for your camera,RPM and lighting conditions (usually adding strong light helps if you do not have reflective surfaces on the tube).
For a correct output you need to balance each parameter so that:
the exposure time is as short as possible
the focused areas overlap between consecutive shots (if not, you can sample over more rotations, similar to how old FDD sector reading worked...)
extract the focused part of the shots
You need just the focused middle part of each shot, so empirically take a few shots with your setup and choose the dy size. Then use that as a constant later on. Extract the middle part (slice) from each shot; in my example image it is the red area.
combine slices
You just copy the slices together (or average the overlapped parts). They should overlap a bit so you do not get holes in the final image. As you can see, my final image example uses smaller slices than were acquired, to make this more obvious; a small code sketch of this slicing and copying follows below.
Your camera image can be off by a few pixels between shots due to vibrations, so if that is a problem in the final image you can use SIFT/SURF + RANSAC auto-stitching for higher-precision output.
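A minimal sketch of the slice extraction and combining, assuming the frames are already grabbed into a vector, the strip height dy was chosen empirically as described, and the strips are simply stacked (no overlap averaging):

#include <opencv2/opencv.hpp>
#include <vector>

// Stack the sharp centre strip of each frame under the previous one to unroll the label.
cv::Mat stitchCentreStrips(const std::vector<cv::Mat> &frames, int dy)
{
    CV_Assert(!frames.empty());
    int w  = frames[0].cols;
    int y0 = frames[0].rows / 2 - dy / 2;            // top of the sharp centre strip

    cv::Mat out((int)frames.size() * dy, w, frames[0].type());
    for (size_t i = 0; i < frames.size(); ++i)
    {
        cv::Rect strip(0, y0, w, dy);                // the "red area" from the description above
        frames[i](strip).copyTo(out(cv::Rect(0, (int)i * dy, w, dy)));
    }
    return out;
}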
I have absolutely no idea how to render water (ocean, lake, etc.). Every tutorial I come across assumes I have the basic knowledge on this subject and therefore speaks abstractly about the issue, but I don't have it.
My goal is to have a height based water level in my terrain.
I can't find any good article that will help me get started.
The question is quite broad. I'd split it up into separate components and get each working in turn. Hopefully this will help narrow down what those components might be; unfortunately I can only offer the higher-level discussion you aren't directly after.
The wave simulation (geometry and animation):
A procedural method will give a fixed height for a position and time based on some noise function.
A very basic idea is y = sin(x) + cos(z). Some more elaborate examples are in GPUGems.
Just like in the image, you can render the geometry by creating a grid, sampling heights (y) at each grid (x, z) position and connecting those points with triangles.
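A minimal sketch of filling such a grid with the sin/cos heights (plain C++; the parameter names are made up here):

#include <cmath>
#include <vector>

struct Vertex { float x, y, z; };

// Sample y = sin(x) + cos(z) over a regular grid at animation time t.
std::vector<Vertex> buildWaveGrid(int gridSize, float spacing, float t)
{
    std::vector<Vertex> verts;
    verts.reserve(gridSize * gridSize);
    for (int i = 0; i < gridSize; ++i)
        for (int j = 0; j < gridSize; ++j)
        {
            float x = i * spacing;
            float z = j * spacing;
            float y = std::sin(x + t) + std::cos(z + t);  // wave height at this grid point
            verts.push_back({x, y, z});
        }
    return verts;  // connect neighbouring points with two triangles per grid cell when drawing
}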
If you explicitly store all the heights in a 2D array, you can create some pretty decent-looking waves and ripples. The idea here is to update each height based on its neighbouring heights, using a few simple rules. For example, each height moves towards the average of its neighbours, but also tends towards an equilibrium height of zero. For this to work well, each height also needs a velocity value to give the water momentum.
I found some examples of this kind of dynamic water here:
// height[i][j]   : current water height at cell (i,j); height_v[i][j] : its vertical velocity
// neighbours west/east/south/north are height[i-1][j], height[i+1][j], height[i][j-1], height[i][j+1]
height_v[i][j] += (height_west + height_east + height_south + height_north) / 4 - height[i][j];
height_v[i][j] *= damping;   // e.g. 0.99f, so ripples slowly die out
height[i][j]   += height_v[i][j];
Rendering:
Using alpha transparency is a great first step for water; I'd start there until your simulation is running OK. The primary effect you'll want is reflection, so I'll just cover that. Further on, you'll want to scale the reflection value using the Fresnel ratio. You may also want an absorption effect (like fog) underwater based on distance (see Beer's law, essentially exp(-distance * density)). Getting really fancy, you might want to render the parts underneath the water with refraction. A quick sketch of the Fresnel and absorption terms follows below; after that, back to reflections.
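A minimal sketch of those two terms as plain functions, assuming Schlick's approximation for the Fresnel ratio and an air/water f0 of roughly 0.02; in practice you would evaluate these in a fragment shader:

#include <cmath>

// Schlick's approximation: how strongly the reflection shows, given the view angle.
float fresnelReflectance(float cosTheta, float f0 = 0.02f)
{
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}

// Beer's law style absorption: fraction of light surviving `distance` units of water.
float absorption(float distance, float density)
{
    return std::exp(-distance * density);
}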
Probably the simplest way to render a planar reflection is stencil reflections, where you draw the scene mirrored from underneath the water and use the stencil buffer so that only pixels where you've previously drawn water are affected.
An example is here.
However, this method doesn't work when you have a bumpy surface and the reflection rays are perturbed.
Rather than render the underwater reflection view directly to the screen, you can render it to a texture. Then you have the colour information for the reflection when you render the water. The tricky part is working out where in the texture to sample after calculating the reflection vector.
An example is here.
This uses textures but just for a perfectly planar reflection.
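A minimal sketch of the "mirrored" camera used when filling that reflection texture, assuming a horizontal water plane at height waterY and a GLM-style maths library (the names are illustrative; it does not solve the texture-lookup part, only the mirrored render):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Reflect the world about the plane y = waterY, then apply the ordinary view matrix.
glm::mat4 reflectionView(const glm::mat4 &view, float waterY)
{
    glm::mat4 reflect = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f,  waterY, 0.0f))
                      * glm::scale(glm::mat4(1.0f),     glm::vec3(1.0f, -1.0f,  1.0f))
                      * glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, -waterY, 0.0f));
    return view * reflect;   // remember to flip face culling: mirroring inverts triangle winding
}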
See also: How do I draw a mirror mirroring something in OpenGL?
A question on how this could be done, if it is possible at all.
I have sprites for each of the following directions (up, down, left, right, up-right, up-left, down-right, and down-left). I am making a game similar to old-school Zelda, running around a tile map (made with the Tiled editor). This is working well, except that now I want to be able to shoot arrows/spells at any location on the map. I can do so, but the graphics look horrible because my character only turns in 45-degree steps.
I have worked around this so that I can only shoot in the direction my character is facing, but now I can't hit targets that are not at a 45-degree angle from me. To fix this, I would need a sprite for every single degree, or some way to combine the images at, say, 0 degrees (up) and 45 degrees (up-right) to get, say, 10 degrees via interpolation. Is this possible? Any ideas on how to do it?
I am looking into keyframe animations, since I wouldn't need so many sprites and would use much less video memory (and get smoother animations), but I still run into the same problem. I would like to know whether this is conceptually possible, and if so, a little pseudocode or a snippet would be much appreciated.
One other question: if this is possible, do I need to render it via OpenGL in 3D? I didn't really know whether 3D would help in a 2D (orthogonal tile) game, but it might make falling spells look like they are falling downward rather than just moving across tiles from above to below.