I wrote a program which should render a cone to the screen. The problem is that in the first frame I can see the cone displaying correctly on the iOS Simulator screen; after that, the cone disappears. When I set a breakpoint, all the data looks OK to me.
I am using OpenGL ES 1.0.
It is true that you should show us some code... but for what it's worth, I had the same problem, and in my case the error was that I was calling glEnableClientState(...) at the initialisation stage while calling glDisableClientState(...) at every frame (every Render() call). After the first frame the client states were disabled, so nothing was drawn.
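As illustration only, a minimal sketch of the balanced pattern in OpenGL ES 1.0 ('coneVertices' and 'coneVertexCount' are hypothetical names); the client state is enabled and disabled within the same frame:

// called once per frame; 'coneVertices' and 'coneVertexCount' are placeholders
void Render(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glEnableClientState(GL_VERTEX_ARRAY);              // enable for this frame...
    glVertexPointer(3, GL_FLOAT, 0, coneVertices);
    glDrawArrays(GL_TRIANGLE_FAN, 0, coneVertexCount);
    glDisableClientState(GL_VERTEX_ARRAY);             // ...and disable in the same frame
}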
Currently I’m working on a project to mirror a camera feed for a blind spot.
The camera outputs a 640 x 480 NTSC signal.
The output screen is 854 x 480 NTSC.
I grab the camera with an EasyCAP video grabber.
On the Banana Pi I installed OpenCV 2.4.9.
The critical point of this project is that the video on the display needs to be real time.
Whenever I comment out the line that puts the window into fullscreen, a small window pops up and the footage runs without delay or lag.
But when I set the video to fullscreen, the footage becomes slow and lags.
Part of the code:
namedWindow("window",0);
setWindowProperty("window",CV_WND_PROP_FULLSCREEN,CV_WINDOW_FULLSCREEN);
while(1){
cap>>image;
flip(image, destination,1);
imshow("window",destination);
waitKey(33); //delay 33 ms
}
How can I fill the screen with the camera footage without losing speed and frames?
Is it possible to output the footage directly to the composite output?
The problem is that the upscaling and drawing are done in software here. The Banana Pi's processor is not powerful enough to handle the required throughput at 30 frames per second.
This is an educated guess on my side, as even desktop systems can run into lag problems when processing and simultaneously displaying video.
A common solution in the computer vision community for this problem is to use OpenGL for display. Here, the upscaling and display is offloaded to the graphics processor. You can do the same thing on a Banana Pi.
If you compiled OpenCV with OpenGL support, you can try it like this:
namedWindow("window", WINDOW_OPENGL);
imshow("window", destination);
Note that if you use OpenGL, you can also save the flip operation by using an appropriate modelview matrix. For this, however, you will probably need to dive into GL code yourself instead of using imshow.
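If you go down that route, a minimal sketch of the mirroring itself, assuming a fixed-function GL context and a camera frame already uploaded as a texture ('cameraTex' is a placeholder handle):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, cameraTex);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(-1.0f, 1.0f, 1.0f);   // negate x: the GPU mirrors the quad, replacing cv::flip

glBegin(GL_QUADS);             // fullscreen quad; the default projection maps [-1,1] to the viewport
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f); // t flipped because OpenCV rows are top-down
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
glEnd();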
I fixed the whole problem by using:
namedWindow("window",1);
Flag 1 stands for WINDOW_AUTOSIZE.
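For reference, flag 1 can also be written with the named constant (same behaviour in the OpenCV 2.4 C++ API):

namedWindow("window", WINDOW_AUTOSIZE);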
The footage now runs much closer to real time.
I’m using a small monitor, so the window size is nearly the same as the monitor’s.
I'm playing around with OpenGL (3.3, on OS X Mavericks), and I'm getting random parts of my screen rendered into my window. I'm assuming that's probably clear evidence that I'm doing SOMETHING wrong... but what? Is it something with uninitialized values in a buffer? Am I using a buffer I didn't create? Some weird memory management thing? Or something like that?
Sorry if the question is a bit vague; I'm just betting that this is one of those bugs that OpenGL vets will hear and go "Of course! That means {insert thing I'm doing wrong}".
Here's a screenshot to give an idea of what I'm talking about:
The black circle is what I'm attempting to render; the upside-down Google logo is what I don't understand. Also, every time I run it I get different random textures.
Thanks! And I'd be happy to supply more details, I just don't know what other relevant info to include...
Thanks to @Andon M. Coleman (in the comments above), I've realized that this was simply a result of me not properly clearing the color buffer.
Specifically, my pipeline involved rendering to a texture and then blitting that texture to the screen. I WAS correctly clearing the SCREEN's color buffer, but I never cleared the intermediate framebuffer's color buffer.
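A minimal sketch of the fix, assuming an already-created framebuffer object (the 'fbo' handle and the 'width'/'height' values are placeholders); the point is that each render target needs its own glClear:

// pass 1: render the scene into the offscreen framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, fbo);             // 'fbo' is a placeholder handle
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the intermediate target too
// ... draw the scene into the texture ...

// pass 2: blit the result to the default framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);                       // clearing only this one was the bug
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBlitFramebuffer(0, 0, width, height,              // 'width'/'height' are placeholders
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);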
The problem is, Cocos seems to think Retina devices have the same resolution as standard devices. If you draw a sprite at 768,1024 on the iPad, it will put it at the upper-right corner. If you draw a sprite at 768,1024 on the Retina iPad, content scaling makes it so it also lands in the upper-right corner. Yet if you draw a sprite at 640,1136 on the Retina iPhone 5, it isn't in the upper-right corner; it's off the screen by 2x the distance. In order to put it in the corner you have to draw it at 320,568, because of the content scaling.
I do have a Default-568h@2x.png image, and I am on version 2.1.
My question is: is there a way to get Cocos2d to make drawing a sprite at 640,1136 on an iPhone 5 put the sprite in the upper-right corner?
Is it possible to set up a custom cocos2d GL projection, setting winSizeInPoints equal to winSizeInPixels with a content scale of 1.0, so that you can use the iPhone 4/5's full native resolution instead of the half-size coordinates of older iPhones?
You can easily do this by changing the iPad suffix to use "-hd" via CCFileUtils. That way iPad devices will load the regular -hd assets for Retina iPhones.
Update regarding comments:
Positions on devices are measured in points, not pixels. So a non-Retina iPad and a Retina iPad both have a point resolution of 1024x768. This is great precisely because it makes adapting to screens with different pixel densities a no-brainer. The same goes for the iPhone devices.
My suspicion is that you simply haven't added the Default-568h@2x.png launch image to your project yet, which may cause widescreen devices to be treated differently.
And specifically, cocos2d versions dated before the iPhone 5 have a bug that treats the iPhone 5 as a non-Retina device, proper default image or not.
Yes, you can share assets. An easy way to do this is to put all the common sprites into a sprite sheet, and at runtime check whether the device is an iPad; if so, load the iPhone HD sheet. We did this in many projects and it worked.
if (IS_IPAD)
{
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"GameSpriteSheet-hd.plist"];
}
else
{
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"GameSpriteSheet.plist"];
}
For the background image, it is good to have a separate iPad image; if the scaling doesn't look bad, you can also scale the image at runtime.
As far as I can understand what you are trying to do, the following approach should work:
1. call [director enableRetinaDisplay:NO] when setting up your CCDirector;
2. hack or override the following methods in CCDirector so that winSizeInPixels is defined as the full screen resolution:
a. setContentScaleFactor:;
b. reshapeProjection:;
c. setView:;
Step 1 will make sure that no scaling factor is ever applied when rendering sprites or doing calculations; Step 2 will ensure that the full screen resolution is used whenever required (e.g., when defining the projection, but likely elsewhere as well).
About Step 2, you will notice that all listed methods show a statement like this:
winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * __ccContentScaleFactor, winSizeInPoints_.height * __ccContentScaleFactor );
__ccContentScaleFactor is made equal to 1 by Step 1, and you should leave it like that; you could instead customise the winSizeInPixels calculation to your aim, like this:
if (<IPHONE_4INCHES>)
    winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * 2, winSizeInPoints_.height * 2 );
else
    winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * __ccContentScaleFactor, winSizeInPoints_.height * __ccContentScaleFactor );
Defining a custom projection would unfortunately not work, because winSizeInPixels is always calculated based on __ccContentScaleFactor; but __ccContentScaleFactor is also used everywhere in Cocos2D to position and size sprites and the like.
A final note on implementation: you could hack these changes into the existing CCDirectorIOS class, or you could derive your own MYCCDirectorIOS from it and override the methods there.
Hope it helps.
I need to create a game for a university project in a short time, with SFML and C++. My question is: how fast is SFML 2.0?
I mean, if I recreate the whole background and all the sprites at each step of the main loop, would that cause a low FPS rate? If so, can I just make a bitmap with all the elements on it and draw that?
Or must I repaint only the changes, pixel by pixel, and then refresh the screen?
The reason I am asking is that in the past, when I chose the first approach (truthfully redrawing everything pixel by pixel) with SDL, it was very slow, and the second option described above seems to require more work; I only have about 3 days to do it.
I hope you understand what confused me, and can give me good advice on how to approach it.
In each frame you have to:
1) clear the screen
2) draw your sprites
3) display the screen
Obviously, you don't have to recreate your sprites at each step: create them once, then just draw them every frame.
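A minimal sketch of that loop in SFML 2, assuming a hypothetical texture file "sprite.png"; everything is created once, and each frame only clears, draws, and displays:

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Game");
    window.setFramerateLimit(60);

    sf::Texture texture;
    texture.loadFromFile("sprite.png"); // placeholder asset, loaded once
    sf::Sprite sprite(texture);         // created once, outside the loop

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();                 // 1) clear the screen
        window.draw(sprite);            // 2) draw your sprites
        window.display();               // 3) display the screen
    }
}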
I'd like to draw the inside of a box fullscreen (i.e. it should completely fill the viewport) using OpenGL. The box should have perspective.
I presume I'll have to change the dimensions of the box depending on the viewport size but I'm not sure how to go about this.
I'm trying to achieve something like the room in this image:
My question is: how can I achieve this?
Use the same coordinates for the front edges of the four "wall" quads as you passed to glFrustum. The usual viewport code will work just fine without modification (it's basically just telling OpenGL where to display its output, which you (nearly) always want to be the full size of the window you're given). Just be aware that since you've told it to fill the view, you'll get linear distortion when/if the shape of the display area changes (i.e., square window -> square box, oblong window -> oblong box).
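A minimal sketch of that in fixed-function GL (the specific frustum values are assumptions): give the box the same cross-section as the near rectangle you pass to glFrustum, put its front edges on the near plane, and let the walls run straight back; each wall then projects from a viewport edge toward the back wall, filling the screen exactly:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 10.0); // near rectangle: x,y in [-1,1] at z = -1

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_QUADS);
// left wall (x = -1): front edge on the near rectangle, running straight back
glVertex3d(-1.0, -1.0, -1.0);
glVertex3d(-1.0,  1.0, -1.0);
glVertex3d(-1.0,  1.0, -9.0);
glVertex3d(-1.0, -1.0, -9.0);
// back wall (z = -9, kept inside the far plane)
glVertex3d(-1.0, -1.0, -9.0);
glVertex3d( 1.0, -1.0, -9.0);
glVertex3d( 1.0,  1.0, -9.0);
glVertex3d(-1.0,  1.0, -9.0);
glEnd();
// the right (x = +1), top (y = +1) and bottom (y = -1) walls follow the same pattern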
It's a bit late, almost one year on, but seeing that there is no complete answer here (or at least I couldn't solve the problem with what is here), I'll point you to this question, which can help for sure:
How to ensure that a plane perfectly fills the viewport in OpenGL ES