I have attached a camera object to a mobile object (a car) in a scene. The camera shows the area the object is in and facing, as if looking out the front window of the car.
My problem is that while my object (with the camera) moves or rotates, the objects on the screen (street light poles, other cars moving in parallel) appear to shake. That is, I see the solid object plus faint transparent copies of it right next to the original. It is more noticeable when the camera changes orientation than when it moves straight. When moving straight, the lights shake while they are far from the observer, stop shaking at some intermediate range, and start shaking again as they get nearer.
I do not think OpenGL applies motion blur in the background by itself, but I also could not find a name for this problem, so I have no starting point to search from.
When the program is in full-screen mode and the time required to draw the full scene stays below 16 ms (for 60 fps), movement is smooth with no objects oscillating.
When the draw time varies too much, or when I set Vega to limit the FPS algorithmically, the scene oscillates.
When I set the swap interval to 1, i.e. let the underlying draw thread synchronize with the screen refresh, it is fine again.
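For reference, what fixed it is vertical sync: with a swap interval of 1 the buffer swap waits for the monitor refresh, so every rendered frame is displayed for a whole number of refreshes and the ghost copies caused by uneven frame pacing disappear. The exact call depends on the windowing layer; a minimal sketch under GLFW (Vega wraps its own windowing, so the setting there is different):

    #include <GLFW/glfw3.h>

    void enableVsync(GLFWwindow* window)
    {
        glfwMakeContextCurrent(window);  // the swap interval is per-context
        glfwSwapInterval(1);             // 1 = sync buffer swaps to the refresh
    }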
I have been following some OpenGL tutorials for an open-world project I am currently working on. The goal is to have an open-world scene with several objects (mountains, etc.) placed inside a skybox.
I would like to ask whether there is any way for the camera to move freely inside the skybox, "interacting" with potential objects in it, but without ever getting out of the boundaries of the box. In the tutorials the translation of the camera is removed, so it can only look around without moving.
Is it common practice to actually move the camera inside the skybox, or should I somehow move the skybox along with the camera, thus never reaching the boundaries of the box?
A skybox is usually rendered with no offset relative to the camera, because its content represents things that are very far away (many times farther than any actual camera movement), like stars or mountains many kilometers distant. So even if you move, say, 100 m in any direction, the rendered result does not change at all (or changes so little it cannot be noticed).
If your skybox contains things you want to move towards, that is doable, but you need to limit the movement so you do not get too close, as that would result in pixelation of the skybox and eventually even crossing it. That can be done with the game terrain (you cannot jump over the boundary mountains or swim too far from an island, etc.).
Another option is to limit the camera's distance from the skybox center to some safe value. If the camera gets farther than the limit, move the skybox so it matches the distance again. That way you can move nearer to or farther from the skybox up to a point (it gets bigger/smaller on the near/far side) and never cross it, without any actual restrictions on the camera position.
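A minimal sketch of that recentering in plain C++ (the Vec3 type here is hypothetical; plug in your own math library):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Drag the skybox center along whenever the camera strays more than
    // maxDist from it, so the camera can never reach the box walls.
    void updateSkyboxCenter(const Vec3& cam, Vec3& boxCenter, float maxDist)
    {
        Vec3 d = { cam.x - boxCenter.x, cam.y - boxCenter.y, cam.z - boxCenter.z };
        float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        if (len > maxDist) {
            float t = (len - maxDist) / len;  // fraction of d to absorb
            boxCenter.x += d.x * t;
            boxCenter.y += d.y * t;
            boxCenter.z += d.z * t;
        }
    }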
First things first: when you are rendering a skybox, you generally don't render an actual box.
The skybox contains content that never changes, or changes only very slowly, and is so far away that the player will never reach it. The skybox is stored in a cube map texture and rendered with a full-screen rectangle. In the shader you use OpenGL's cube map sampling, sampling the map with the eye vector.
If the skybox is dynamic, like a dynamic time of day, it is only re-rendered every couple of frames, or only when needed.
A while back I wrote an article on how to do it: GLSL Skybox (you will need to update the code to a modern OpenGL version, though...)
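The core of the full-screen-rectangle approach looks roughly like this in modern GLSL (a sketch; shader compilation, the quad VAO and the cube map upload are assumed to exist elsewhere, and invViewProj is the inverse of projection * view with the view translation stripped on the CPU side):

    static const char* kSkyVS = R"(
        #version 330 core
        layout(location = 0) in vec2 pos;   // full-screen quad corners in NDC
        out vec3 eyeDir;
        uniform mat4 invViewProj;           // inverse(proj * rotation-only view)
        void main()
        {
            gl_Position = vec4(pos, 1.0, 1.0);  // z = w: depth lands on the far plane
            vec4 p = invViewProj * vec4(pos, 1.0, 1.0);
            eyeDir = p.xyz / p.w;               // world-space view ray for this corner
        }
    )";

    static const char* kSkyFS = R"(
        #version 330 core
        in vec3 eyeDir;
        out vec4 color;
        uniform samplerCube sky;
        void main()
        {
            color = texture(sky, normalize(eyeDir));  // cube map lookup by direction
        }
    )";

Draw the quad last with glDepthFunc(GL_LEQUAL) so the sky only fills pixels nothing else has covered.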
I have some 64x64 sprites which work fine (no flicker/shuffling) when I move my camera normally. But as soon as I change the camera.zoom level away from 1f (zooming was supposed to be a major mechanic in my game), the sprites flicker every time I move.
For example changing to 0.8f:
Left flicker:
One keypress later: (Right flicker)
So when you move around, the flickering map (however slight) is really distracting for gameplay...
The zoom is a flat 0.8f and I'm currently using camera.translate to move around. I've tried casting the translation to (int) and it still flickered... My texture/sprite is using Nearest filtering.
I understand zooming may change/pixelate my tiles, but why do they flicker?
Edit
For reference, here is my tilesheet:
It's because of the nearest filtering. Depending on the amount of zoom, certain lines of artwork pixels will straddle lines of screen pixels, so they get drawn one pixel wider than other lines. As the camera moves, the rounding works out differently on each frame of animation, so different lines are drawn wider on each frame.
If you aren't going for a retro low-res aesthetic, you could use linear filtering with mipmaps (MipMapLinearLinear or MipMapLinearNearest) and start with larger-resolution art. The first looks better if you are smoothly transitioning between zoom levels, at a possible performance cost.
Otherwise, you could round the camera's position to an exact multiple of the size of one art pixel in world units. Then the same enlarged columns always correspond to the same screen pixels, which cuts down perceived flickering considerably. You said you were casting the camera translation to an int, but that only works when the art is scaled so one pixel of art corresponds exactly to one pixel of screen.
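The snapping itself is one line of math. The question is libGDX (Java), but the idea is framework-agnostic; a sketch in C++, where worldPerPixel is the size of one art pixel in world units (derive it from your own scale):

    #include <cmath>

    // Snap one camera coordinate to the art-pixel grid.
    float snapToPixelGrid(float cameraPos, float worldPerPixel)
    {
        return std::round(cameraPos / worldPerPixel) * worldPerPixel;
    }

    // Per frame (hypothetical camera struct):
    //   camera.x = snapToPixelGrid(camera.x, worldPerPixel);
    //   camera.y = snapToPixelGrid(camera.y, worldPerPixel);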
This doesn't fix the other problem: certain lines of pixels are drawn wider, so they appear to have greater visual weight than similar nearby lines, as can be seen in both your screenshots. One way around that would be to do the zoom as a secondary step, so you can control the appearance with an upscaling shader. Draw your scene to a frame buffer that is sized so one pixel of texture corresponds to one world pixel (with no zoom), and lock your camera to integer locations. Then draw the frame buffer's contents to the screen and do your zooming at this stage, using a specialized upscaling shader to minimize blurriness and avoid nearest-filtering artifacts. There are various shaders for this purpose that you can find by searching online; many have been developed for use with emulators.
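Setup for that two-stage approach, sketched in raw OpenGL (libGDX's FrameBuffer class wraps the same calls; worldPixelsW/worldPixelsH are your unzoomed scene size, and error checking is omitted):

    #include <GL/gl.h>  // assumes an OpenGL 3.0+ context and loader

    GLuint createPixelPerfectTarget(int worldPixelsW, int worldPixelsH, GLuint& colorTex)
    {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        // One texel per world pixel: no zoom is applied at this stage.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, worldPixelsW, worldPixelsH,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);
        return fbo;
    }

    // Each frame:
    //   1. bind the FBO and draw the scene with the integer-locked camera;
    //   2. bind the default framebuffer and draw one quad sampling colorTex,
    //      scaled by the zoom, through the upscaling shader.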
I'm rendering a particle system of vertices, which are tessellated into quads in a geometry shader and textured/rendered as point sprites, scaled in size depending on how far away they are from the camera. I'm trying to render every frame of my scene into cube maps: essentially I place six cameras in my scene, point them in each direction for the faces of the cube, and save an image.
My point sprites are of varying sizes. When they near the border of one camera's view (if they are large enough), they appear in two cameras simultaneously. Since point sprites always face the camera, they are not continuous along the seam when I wrap my cube map back into 3D space. This is especially noticeable when the points are quite close to the camera, as the points are larger and stretch further into both camera views. I'm also doing some alpha blending, so this may be contributing to the problem as well.
I don't think I can just cull points that are near the edge of the camera, because when I put everything back into 3D I'd expect strange areas where the cloud is more sparsely populated. Another thought I had was to blur the edges of each camera, but I think this too would give me a weird blurry zone when I go back to 3D space. I feel like I could manually edit the frames in Photoshop so they look OK, but this would be kind of a pain since it's an animation at 30 fps.
The image attached is a detail from the cube map. You can see the horizontal seam where the sprites are not lining up correctly, and a slightly less noticeable vertical one on the right side of the image. I'm sure that my camera settings are correct, because I've used this same camera setup in other scenes and my cubemaps look fine.
Anyone have ideas?
I'm doing this in openFrameworks / OpenGL, FWIW.
Instead of facing each camera's view plane, make them face the common origin of the cameras? Not sure if this fixes everything, but intuitively I'd say it should look close to OK. Maybe this is already what you do, I have no idea.
(I'd like for this to be a comment, but no reputation)
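To spell the suggestion out: build each quad from a basis pointing at the shared camera position (viewpoint-oriented billboarding) rather than aligning it with the view plane, so a sprite straddling two cube faces gets the identical orientation in both renders. A sketch in plain C++ with a hypothetical Vec3 (the same math would go in the geometry shader):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 normalize(Vec3 v) {
        float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return { v.x / l, v.y / l, v.z / l };
    }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }

    // Right/up axes for a quad at `particle` that faces the camera
    // origin `eye` (not the view plane).
    void billboardAxes(Vec3 particle, Vec3 eye, Vec3& right, Vec3& up)
    {
        Vec3 toEye = normalize({ eye.x - particle.x,
                                 eye.y - particle.y,
                                 eye.z - particle.z });
        Vec3 worldUp = { 0.0f, 1.0f, 0.0f };     // assumed up vector
        right = normalize(cross(worldUp, toEye));
        up    = cross(toEye, right);             // unit length by construction
    }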
I have a bit of experience with Cocos2d but it's also been quite a while since I've worked with it. That being said, I don't necessarily need code handed to me - just a pointer towards the right approach I should be taking to achieve my requirements.
My project is a simple block game where players move blocks by swiping them (the blocks move at the exact speed of the swipe, no acceleration). What I want to achieve: when the player swipes a block off screen, the part of the sprite hidden off-screen should appear at the opposite edge of the screen and keep moving until the drag motion stops (a bit like the old mobile game Snake II). When the sprite has completely moved off the screen, it should be completely visible somewhere on the opposing side, so the screen is like an infinite loop the sprite can move on. For example, the sprite is 40% visible on the left of the screen and 60% visible on the right (split at the screen boundary). As the sprite moves left, it becomes 35% visible on the left and 65% on the right.
What is the best approach to tackle this? Should I be duplicating the sprite and then moving the new copy onto the screen in an opposing fashion? Or is this somehow possible with one sprite and some kind of mask?
Any help would be greatly appreciated. (I'm not at home now but I can add sample code and images later if my explanation isn't clear)
I'm using the Objective-C version of Cocos2d.
I've done this several times before; it's quite simple. Assuming your sprite can move off screen in all directions, you need a total of 4 identical sprites. One sprite is the "master", i.e. the one that is always visible when the sprite is not near any screen border. Let's call the other 3 the "slaves".
Every frame you check whether the master sprite is wholly contained within the screen. A simple CGRectContainsRect test.
If it's not contained, you make the three slave sprites visible and offset them by screen width, screen height, and screen width & height respectively, assuming the master sprite is leaving the screen at the lower left corner. If it were leaving at the upper right corner, you would subtract the screen width/height instead.
Now once the master has left the screen entirely, you need to offset its position once and hide the slave sprites once more. For example if the master left the screen to the right, you would have to subtract screen.width from its x position once.
So basically you just need to determine if the master sprite is near any border, offset the slave sprites according to which quadrant (i.e. lower left, upper right, etc.) the master sprite is in, and then offset the master and disable the slaves once it has left the screen entirely.
Depending on your needs you may also have to extend collision checks to all four sprites, or you may decide to offset the master sprite not when it has left the screen entirely but as soon as its position is no longer within the screen.
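The per-frame logic, sketched in plain C++ (the original is Cocos2d/Objective-C; the Sprite struct here is hypothetical, and the slaves are assumed to share the master's texture and size):

    struct Vec2 { float x, y; };
    struct Sprite { Vec2 pos; float halfW, halfH; bool visible; };

    void updateWrap(Sprite& master, Sprite slaves[3], float screenW, float screenH)
    {
        // Offset a wrapped copy needs on each axis (0 = fully inside there).
        float sx = 0.0f, sy = 0.0f;
        if      (master.pos.x - master.halfW < 0)       sx =  screenW;
        else if (master.pos.x + master.halfW > screenW) sx = -screenW;
        if      (master.pos.y - master.halfH < 0)       sy =  screenH;
        else if (master.pos.y + master.halfH > screenH) sy = -screenH;

        // Slave 0 wraps horizontally, slave 1 vertically, slave 2 diagonally.
        slaves[0].visible = sx != 0.0f;
        slaves[0].pos = { master.pos.x + sx, master.pos.y };
        slaves[1].visible = sy != 0.0f;
        slaves[1].pos = { master.pos.x, master.pos.y + sy };
        slaves[2].visible = sx != 0.0f && sy != 0.0f;
        slaves[2].pos = { master.pos.x + sx, master.pos.y + sy };

        // Once the master has left the screen entirely, teleport it to the
        // wrapped position; the slaves then hide on the next update.
        if (master.pos.x + master.halfW < 0 || master.pos.x - master.halfW > screenW)
            master.pos.x += sx;
        if (master.pos.y + master.halfH < 0 || master.pos.y - master.halfH > screenH)
            master.pos.y += sy;
    }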
I have a rectangle in my window and I am trying to make it clickable by defining its area.
If the mouse click is inside this area, it counts as a click; otherwise it does not.
For example, on the window, let's assume the rectangle's corner and size are:
x = 40, y = 50; width = 200, height = 100;
So, a click will be counted when
(mouseXPos > getX()) && (mouseXPos < (getX()+width)) && (mouseYPos > getY()) && (mouseYPos < (getY()+height))
Now, I am applying a lookAt transformation to the object by inheriting from a class which has lookAt functions. I am also using a camera to inspect the different faces of the object (camera rotation), so as the camera moves, the object rotates along various axes and shows different faces.
However, when the object moves, I would have expected the vertices of the rectangle to change. The vertices should also have changed when applying the gluLookAt function, but apparently they do not, and my click area remains stationary at those points even though the object is no longer there. How do I tackle this problem? How do I make my object clickable and attach mouse events to it?
If you're trying to click on 3D shapes, and you are moving the camera, I wouldn't check this in screen coordinates.
Instead, you can project the point where the user has clicked into a line in 3D space, and intersect that against the objects which can be clicked on.
GLU has a function for this: gluUnProject()
If you give that function the details of your view, along with the screen point being clicked on, it can tell you where in 3D space the user has clicked. If you check a point at both the near and far planes, the line between these points can be checked against your object.
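A minimal sketch of that near/far unprojection (classic fixed-function GL/GLU, assuming a current context):

    #include <GL/glu.h>

    void pickRay(int mouseX, int mouseY, double rayStart[3], double rayEnd[3])
    {
        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        // Window y runs bottom-up in OpenGL, so flip the mouse y.
        GLdouble winX = mouseX;
        GLdouble winY = viewport[3] - mouseY;

        // Unproject at the near plane (winZ = 0) and far plane (winZ = 1);
        // the segment between the two points is the pick ray.
        gluUnProject(winX, winY, 0.0, model, proj, viewport,
                     &rayStart[0], &rayStart[1], &rayStart[2]);
        gluUnProject(winX, winY, 1.0, model, proj, viewport,
                     &rayEnd[0], &rayEnd[1], &rayEnd[2]);
    }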
The other alternative, is some kind of ID buffer, where you render the scene off-screen, but instead of outputting shaded pixels, you output ID values for each object. The hit-test is then just a matter of reading a pixel from that buffer.
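With the ID-buffer route, the hit test ends up as a single pixel read. A sketch, assuming an ID pass was just drawn that renders each pickable object in a flat color encoding its ID in the red channel:

    #include <GL/gl.h>

    int pickObjectId(int mouseX, int mouseY, int viewportHeight)
    {
        unsigned char pixel[4] = { 0, 0, 0, 0 };
        glReadPixels(mouseX, viewportHeight - mouseY, 1, 1,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixel);
        return pixel[0];   // 0 = background, otherwise the object's ID
    }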
Seeing that nobody else is answering, I will give it a try. :)
Though I am not very experienced with OpenGL, this seems like the expected behavior to me; it looks like you are mixing up world coordinates with on-screen coordinates.
Think of it this way: if you take a picture of a table in your living room from two different angles, it will look different, but in both images the table occupies the same space in the room. The same can be said about entities in OpenGL: even though you move the camera, the coordinates of the entity do not change, just the perception of it.
Now, if I'm not mistaken, you can translate your world coordinates into on-screen coordinates by applying the same transformations OpenGL applies. I would suggest taking a look at this: OpenGL: projecting mouse click onto geometry
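That translation is what gluProject does, going the opposite direction of the unprojection shown above; a sketch (project a rectangle corner into window coordinates and compare against the mouse there):

    #include <GL/glu.h>

    bool worldToScreen(double wx, double wy, double wz, double& sx, double& sy)
    {
        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        GLdouble winZ;
        if (gluProject(wx, wy, wz, model, proj, viewport,
                       &sx, &sy, &winZ) != GL_TRUE)
            return false;
        sy = viewport[3] - sy;  // flip to top-left-origin mouse coordinates
        return true;
    }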
Thumbs up for the ID buffer alternative. I used it several times and it was really the right thing to do: no expensive geometry picking, and an almost total decoupling from the underlying geometry.
Still, you can only pick the closest geometry (that is, the one "visible" on the screen) and not the models that could be hidden behind it. The same problem appears when dealing with translucent materials.
One solution could be to do some ID peeling, rendering up to four overlapping object IDs into an RGBA ID texture (I have a feeling it could be non-optimal, but it's worth a shot).