Getting wrong size of physics body in Retina device - cocos2d-iphone

I have made some physics bodies, but I am getting a 2x-sized body on Retina devices and I don't understand why this is happening. Can anyone guide me on this? I am attaching images for the iPad and iPad Retina devices.
Image from the non-Retina iPad: on non-Retina the physics body works fine.
Image from the Retina iPad: the physics texture comes out at 2x size.
I made the physics shape in PhysicsEditor and then used GB2ShapeCache to set the fixtures:
[[GB2ShapeCache sharedShapeCache] addFixturesToBody:tireBody2
                                       forShapeName:@"Tire"];

Related

Feed GStreamer sink into OpenPose

I have a custom USB camera with a custom driver on a custom Nvidia Jetson TX2 board that is not detected by the openpose examples. I access the data using a custom GStreamer source. I currently pull frames into a cv::Mat, color convert them and feed them into OpenPose on a per-picture basis; it works fine, but 30-40% slower than a comparable video stream from a plug-and-play camera. I would like to explore things like the tracking that is available for streams, since I'm trying to maximize the fps. I believe the stream feed is superior due to better (continuous) use of the GPU.
In particular, the speedup would come at the expense of confidence, which I would address later: one frame goes through pose estimation and the 3-4 subsequent frames just track the object with decreasing confidence levels. I tried that with a plug-and-play camera and the openpose example, and the results were somewhat satisfactory.
The point where I stumbled is that I can put the video stream into a cv::VideoCapture, but I do not know how to hand the captured frames to OpenPose for processing.
If there is a better way to do it, I am happy to try different things, but the bottom line is that the custom camera stays (I know ;/). Solutions to the issue described or different ideas are welcome.
Things I already tried:
Lower the resolution of the camera (the camera crops below a certain resolution instead of binning, so I can't really go below 1920x1080; it's a 40+ megapixel video camera, by the way)
use CUDA to shrink the image before feeding it to OpenPose (the shrink + pose estimation time was virtually equivalent to the pose estimation on the original image)
since the camera view is static, check for changes between frames, crop the image down to the area that changed and run pose estimation on that section (10% speedup, high risk of missing something)
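For reference, here is a minimal sketch of the handoff I have in mind, assuming the OpenPose C++ Wrapper API (op::Wrapper, emplaceAndPop and the OP_CV2OPMAT macro are taken from the OpenPose 1.7 headers; the GStreamer pipeline string is only a placeholder for my custom source):

#include <openpose/headers.hpp>
#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder pipeline -- replace with the custom GStreamer source.
    cv::VideoCapture cap("v4l2src ! videoconvert ! appsink", cv::CAP_GSTREAMER);

    op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
    opWrapper.start(); // default pose configuration

    cv::Mat frame;
    while (cap.read(frame))
    {
        // OP_CV2OPMAT wraps the cv::Mat in an op::Matrix without copying.
        auto datum = opWrapper.emplaceAndPop(OP_CV2OPMAT(frame));
        if (datum != nullptr && !datum->empty())
        {
            const auto& keypoints = datum->at(0)->poseKeypoints;
            // ... use keypoints, or track over the next 3-4 frames instead ...
        }
    }
    return 0;
}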

Why does a fullscreen window in OpenCV (Banana Pi, Raspbian) slow down the camera footage and make it lag?

Currently I’m working on a project to mirror a camera for a blind spot.
The camera outputs a 640 x 480 NTSC signal.
The output screen is 854 x 480 NTSC.
I grab the camera with an EasyCAP video grabber.
On the Banana Pi I installed open cv 2.4.9.
The critical point of this project is that the video on the display needs to be real time.
Whenever I comment out the line that puts the window into fullscreen, a small window pops up and the footage runs without delay or lag.
But when I set the window to fullscreen, the footage becomes slow and lags.
Part of the code:
namedWindow("window",0);
setWindowProperty("window",CV_WND_PROP_FULLSCREEN,CV_WINDOW_FULLSCREEN);
while(1){
cap>>image;
flip(image, destination,1);
imshow("window",destination);
waitKey(33); //delay 33 ms
}
How can I fill the screen with the camera footage without losing speed and frames?
Is it possible to output the footage directly to the composite output?
The problem is that upscaling and drawing are done in software here. The Banana Pi's processor is not powerful enough to handle the needed throughput at 30 frames per second.
This is an educated guess on my side, as even desktop systems can run into lag problems when processing and simultaneously displaying video.
A common solution in the computer vision community for this problem is to use OpenGL for display. Here, the upscaling and display is offloaded to the graphics processor. You can do the same thing on a Banana Pi.
If you compiled OpenCV with OpenGL support, you can try it like this:
namedWindow("window", WINDOW_OPENGL);
imshow("window", destination);
Note that if you use OpenGL, you can also save the flip operation by using an appropriate modelview matrix. For this, however, you probably need to dive into GL code yourself instead of using imshow.
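For illustration, the mirroring itself can be a one-line modelview scale in fixed-function OpenGL (a sketch only; you would draw the frame as a textured quad yourself rather than going through imshow):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(-1.0f, 1.0f, 1.0f); // mirror horizontally, replacing cv::flip
// ... then draw the video frame as a textured quad ...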
I fixed the whole problem by using:
namedWindow("window",1);
Flag 1 stands for WINDOW_AUTOSIZE.
The footage is more real-time now.
I’m using a small monitor, so the window size is nearly the same as the monitor.

Optimize memory usage in a cocos2d app: how to manage a large package in cocos2d?

I have a problem with downloading a large content package in cocos2d when a category is purchased.
It's 100 MB, and the download must finish before the category can be used.
It includes spritesheets, backgrounds, and sound objects; each category has 20 objects.
Can anyone suggest an idea? I don't want to take up so much of the customer's time.
For iPhone or Mac?
Check This: Cocos2d memory optimisation
Here are some of the things we did in one of our cocos2d iPhone games:
Use the JPEG 2000 image format for background images: it slightly increases loading time, but reduces app size.
Use audio with a bitrate of 128 kbps.
Use TexturePacker to create sprite sheets and make sure you set the NPOT option; by default it's POT. See the image for more information.

Strange scaling behavior

I have my own class that holds a sprite.
The sprite is an animation made with Zwoptex. I have both Retina and standard images set up correctly.
I put my class in the middle of the scene, and, for some reason, the sprite displays at a really small size.
I thought it might be because of scaling (although I never scale the sprite). So I decided to put two NSLogs:
NSLog(#"%f",enemy2.scale);
NSLog(#"%f",enemy2.sprite.scale);
One tells me the scale of my custom class itself and the other the scale of the sprite itself.
However, when I add those two lines of code, the sprite appears at the expected (larger) size.
And the NSLog result is 1.0.
Why? Any ideas?
Unless the scale getter method has been overridden and changes the scale_ instance variable, this should not happen.
Initially I would say that the sprites being small size seems to indicate that you are loading the SD images on a Retina device, instead of the HD images. Be sure to name your SD and HD assets like this if you're using a texture atlas:
// SD images
textureatlas.png
textureatlas.plist
--> player.png
--> enemy.png
// HD images
textureatlas-hd.png
textureatlas-hd.plist
--> player.png // no -HD suffix!
--> enemy.png // no -HD suffix!
It is a common mistake to also suffix the sprite frames inside the texture atlas. If the sprite frames in your HD texture atlas are named player-hd.png and enemy-hd.png, Cocos2D will not find them and will revert to loading the SD images.
Note that TexturePacker detects this and can automatically correct it for you. The last time I used Zwoptex, it would still let you create such incorrect texture atlases.

Combining Direct3D and Axis to make a multiple-IP-camera GUI

Right now, what I'm trying to do is make a new GUI, essentially software using DirectX (more exactly, Direct3D), that displays streaming images from Axis IP cameras.
For the time being I figured that the flow for the entire program would be like this:
1. Get the Axis program to get streaming images
2. Pass the images to the Direct3D program.
3. Display the images on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI videos (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video and had DirectX upload them. Very slow.
Right now I have some unclear things:
1. How to get an Axis program that works in C++ (I'll look up examples later; probably no big deal)
2. How to upload images directly from the Axis IP camera program.
So guys, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything just let me know.
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly to a Direct3D texture.
It's well worth double-buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can de-couple the video frame rate from the rendering frame rate which makes dropped frames a little easier to handle.
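A minimal sketch of that swap (the names and the CopyFrameToTexture helper are made up for illustration; a real implementation must also synchronize the swap with the render thread):

IDirect3DTexture9* texDisplay = NULL; // texture currently being drawn
IDirect3DTexture9* texUpload  = NULL; // texture currently receiving data

void OnNewFrame(const BYTE* frame)
{
    // CopyFrameToTexture is a placeholder; see the LockRect example below.
    CopyFrameToTexture(texUpload, frame);
    IDirect3DTexture9* tmp = texDisplay; // flip the pair on a frame boundary
    texDisplay = texUpload;
    texUpload  = tmp;
}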
I use in-place updates of Direct3D textures (using IDirect3DTexture9::LockRect) and it works very fast. Which part of your program is slow?
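For illustration, an in-place update along those lines might look like this (a sketch assuming a 32-bit BGRA texture created with D3DUSAGE_DYNAMIC in D3DPOOL_DEFAULT; tex, frame, width and height are placeholders):

D3DLOCKED_RECT lr;
if (SUCCEEDED(tex->LockRect(0, &lr, NULL, D3DLOCK_DISCARD)))
{
    BYTE* dst = (BYTE*)lr.pBits;
    for (UINT y = 0; y < height; ++y)
        memcpy(dst + y * lr.Pitch,    // destination rows are pitch-aligned
               frame + y * width * 4, // source rows are tightly packed
               width * 4);
    tex->UnlockRect(0);
}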
For capturing images from Axis cams you may use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used for capturing images and controlling camera zoom and rotation (if available).