I want to make a particle effect of leaves or little pieces of paper falling from above, like autumn leaves or confetti at a wedding. My idea is to use a leaf or paper texture and let the particles fall from the sky while the texture flips on X/Y randomly. But I can't find any property or method that lets me do this.
So my question is, can I?
If I can't flip the texture at all, how else can I achieve the effect?
The CCParticleSystem itself can't flip the textures randomly.
But since CCParticleSystem derives from CCNode, you could try setting the scaleX/scaleY property to -1 to flip the entire particle system. That way you could run two particle systems at the same location, one flipped and the other unflipped.
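A minimal sketch of that approach, written against the cocos2d-x (C++) API, which mirrors the CCParticleSystem/CCNode classes; the "leaves.plist" emitter file and the positions are placeholders, not from the question:

// Two copies of the same emitter at the same spot, one mirrored on X so its
// particle texture appears flipped. "leaves.plist" is a placeholder file name.
CCParticleSystemQuad* leaves = CCParticleSystemQuad::create("leaves.plist");
leaves->setPosition(ccp(240, 320));

CCParticleSystemQuad* leavesFlipped = CCParticleSystemQuad::create("leaves.plist");
leavesFlipped->setPosition(ccp(240, 320));
leavesFlipped->setScaleX(-1.0f);   // flip the whole system, and with it every particle

this->addChild(leaves);
this->addChild(leavesFlipped);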
I have been following some OpenGL tutorials for an open-world project I am currently working on. The goal is to have an open-world scene with several objects (mountains, etc.) placed inside a skybox.
I would like to ask whether there is any way for the camera to move freely inside the skybox, "interacting" with potential objects in it, without ever getting outside the boundaries of the box. In the tutorials the translation of the camera is removed, so it can only look around without moving.
Is it common practice to actually move the camera inside the skybox, or should I somehow move the skybox along with the camera, so that the boundaries of the box are never reached?
A skybox is usually rendered with no offset relative to the camera because its content represents things that are very far away (many times farther than any actual camera movement), like stars or mountains many kilometers distant. So even if you move, say, 100 m in any direction, the rendered result does not change at all (or changes so little that it cannot be noticed).
If your skybox contains things you want to be able to move towards, that is doable, but you need to limit the movement so you don't get too close, as that would result in pixelation of the skybox and eventually even crossing it. That can be done with the game terrain (you cannot jump over the boundary mountains, swim too far from an island, etc.).
Another option is to limit the camera's distance from the skybox center to some safe value. If the camera gets farther than the limit, move the skybox so the distance matches the limit again. That way you can move toward or away from the skybox up to a point (it appears bigger/smaller on the close/far side) without ever crossing it, and without any actual restriction on the camera position.
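A rough C++ sketch of that re-centering idea (the Vec3 type and the maxDistance threshold are made up for illustration):

#include <cmath>

struct Vec3 { float x, y, z; };

// If the camera drifts farther than maxDistance from the skybox center,
// drag the center along the offset so the distance matches the limit again.
Vec3 recenterSkybox(Vec3 camera, Vec3 center, float maxDistance)
{
    float dx = camera.x - center.x;
    float dy = camera.y - center.y;
    float dz = camera.z - center.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist > maxDistance)
    {
        float pull = (dist - maxDistance) / dist;   // fraction of the offset to remove
        center.x += dx * pull;
        center.y += dy * pull;
        center.z += dz * pull;
    }
    return center;
}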
First things first: when you render a skybox, you generally don't render an actual box.
The skybox contains content that never changes, or changes only very slowly, and is so far away that the player will never reach it. It is stored in a cube map texture and rendered as a full-screen rectangle. In the shader you use OpenGL's cube map sampling, looking the texture up with the per-pixel eye vector.
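For illustration only, here is one possible shader pair for that full-screen rectangle, kept as C++ string literals; the uniform/attribute names are invented, and the view matrix is assumed to have its translation stripped so only the camera rotation affects the sky:

// Vertex shader: turn each corner of the full-screen rectangle (given in NDC)
// into a world-space view ray via the inverse projection * view matrix.
const char* skyVertexSrc = R"(
#version 330 core
layout(location = 0) in vec2 corner;   // full-screen rectangle corner in NDC
uniform mat4 invProjView;              // inverse of projection * rotation-only view
out vec3 eyeDir;
void main()
{
    // Draw with glDepthFunc(GL_LEQUAL) so this far-plane depth still passes.
    gl_Position = vec4(corner, 1.0, 1.0);
    vec4 p = invProjView * vec4(corner, 1.0, 1.0);
    eyeDir = p.xyz / p.w;              // direction from the eye through this corner
}
)";

// Fragment shader: sample the cube map with the interpolated eye direction.
const char* skyFragmentSrc = R"(
#version 330 core
in vec3 eyeDir;
uniform samplerCube sky;
out vec4 fragColor;
void main()
{
    fragColor = texture(sky, normalize(eyeDir));
}
)";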
If the skybox is dynamic, like a dynamic time of day, it is only re-rendered every couple of frames or only when needed.
A while back I wrote an article on how to do it: GLSL Skybox (you will need to update the code to a modern OpenGL version, though).
I'm loading 3D objects (OBJ, 3DS or COLLADA files) into my OpenGL application. The 3D environment is quite large (a few hundred metres along each axis).
My problem is that smaller 3D objects (i.e. on the order of 1-2 m or less) don't appear to be depth-tested properly. Depending on the zoom of the camera, I can sometimes see the back face of the object (I have been using a simple cube for testing), or other faces become visible/invisible/torn. Please see the attached images for a better explanation.
I am led to believe the problem is due to mipmapping being enabled. I would either like to disable mipmapping (can someone suggest a simple, fast way to do this?) or increase the resolution used for the mipmapped objects. Or am I barking up the wrong tree completely?
Thanks
Chris
That's the result of insufficient z-buffer precision, which is an issue in games that have huge worlds but (relatively) small objects. The immediate solution would be to try using a 24-bit z-buffer instead of a 16-bit one. Another way to tackle this is to render the game world in two steps: first the big, distant objects, then clear the z-buffer, and then draw the closer objects.
This specific problem is called z-fighting, by the way; here's a great resource on the issue: http://www.codermind.com/articles/Depth-buffer-tutorial.html
The take-away is the last paragraph of the article above:
the true issue is that you can't draw both objects that are very far and objects that are very near with the same depth buffer equations. If you want to draw very far objects then you need to sacrifice your near view by pushing it further. To avoid clipping artifacts you can make your collision envelope large enough so that your clip plane will never intercept an existing object within your frustum. Or you can make object gradually disappear with transparency as they come near your clip plane.

If you want to keep near objects and at the same time draw mountains (or planets) in the far distance, then you can cut your rendering in parts. First drawing your far objects, then clearing the depth buffer and rendering the near objects with a different z buffer.
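A rough fixed-function C++ sketch of that two-pass split (the draw functions and the plane values are placeholders):

// Far pass first with far-away clip planes, then clear only the depth buffer
// and render the near pass with tight clip planes, so each pass keeps full
// depth precision for its own range.
void renderScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 100.0, 50000.0);   // planes for the distant stuff
    glMatrixMode(GL_MODELVIEW);
    drawDistantGeometry();                             // mountains, planets, ...

    glClear(GL_DEPTH_BUFFER_BIT);                      // throw away the far-pass depths
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.5, 200.0);       // tight planes for nearby objects
    glMatrixMode(GL_MODELVIEW);
    drawNearGeometry();                                // the small cubes and props
}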
Like Julio, I believe that this is a depth precision issue, not something related to mip-mapping. However, I suggest you start by adjusting your near and far clipping planes before changing anything else (you are probably already using a 24-bit depth buffer anyway, as that is the default on most drivers/cards). In particular, the near plane should be as far away as possible for your scene. Look for calls to glFrustum or gluPerspective.
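For example (the numbers here are only illustrative), moving the near plane from 0.01 out to 1.0 buys far more precision than shrinking the far plane, since depth precision is roughly governed by the far/near ratio:

// Hypothetical projection setup: keep the near plane as far out as the scene allows.
gluPerspective(60.0,                      // vertical field of view in degrees
               width / (double)height,    // aspect ratio
               1.0,                       // near plane: not 0.01!
               1000.0);                   // far plane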
I'm making a 3D FPS with OpenGL, and here are the basics of how it works. The game is a 3D array of cubes. I know the location of the player's current cube, as well as the camera's x, y, z position and its x, y, z rotation. Right now I just take a square region around the player, render that, and add distance fog. The problem, though, is that I'm still rendering everything that is behind the player. How could I selectively render only what the player sees, instead of rendering everything within a radius X as I am doing now?
Thanks
You are talking about frustum culling, if I understand you correctly. I suggest that you take a look at this tutorial. They provide nice demos and explain everything in detail.
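If you end up rolling it yourself, the core test is small; here is a C++ sketch assuming you already have the six frustum planes with inward-pointing normals:

// A bounding sphere is outside the frustum only if it lies completely behind
// one of the six planes (plane a*x + b*y + c*z + d, normals pointing inward).
struct Plane { float a, b, c, d; };

bool sphereInFrustum(const Plane planes[6], float x, float y, float z, float radius)
{
    for (int i = 0; i < 6; ++i)
    {
        float dist = planes[i].a * x + planes[i].b * y + planes[i].c * z + planes[i].d;
        if (dist < -radius)
            return false;   // entirely behind this plane -> cull it
    }
    return true;            // inside or intersecting the frustum
}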
This sounds like you need to look into culling concepts.
Are the cubes rooms of a maze through which the player navigates? If so, and assuming the rooms are static over the course of the game, you could use a BSP tree to traverse the scene in order of depth, stopping when you pass the player.
I am currently working on designing my first FPS game using JOGL. (Java bindings for OpenGL).
So far I have been able to generate the 'world' (a series of cubes), and a player model. I have the collision detection between the player and the cubes working great.
Now I am trying to add the guns. I have the gun models drawn correctly and loading onto the player model. The first gun I'm trying to implement is a laser gun, which shoots an instantaneous line-of-sight laser at whatever you're aiming at. Before I work on implementing the enemy models, I would like to get the collision detection between the laser and the walls working.
My laser is currently drawn as a series of small cubes, one after the other. The first cube is drawn at the end of the player's gun, and drawing continues from there. The idea is to keep drawing the cubes of the laser until a collision is detected with something, namely the cubes in the world.
I know the locations of the cubes in the world. The problem is that I have to call glPushMatrix to draw my character model, and the laser is then drawn within this modelview, which means I have lost my old coordinate system: I'm drawing the world in one system and the laser in another. Within this player matrix I have to call glRotate and glTranslate several times in order to sync everything up with the way the camera is rotating. The laser is then built by translating along the z-axis of this new system.
My problem is that through all of these transformations, I no longer have any idea where my laser exists in the map coordinate system, primarily due to the rotations involving the camera.
Does anyone know of a method, or have any ideas, for how to solve this problem? I believe I need a way to convert the new coordinates of the laser into the old coordinates of the map, but I'm not sure how to go about undoing all of the transformations that have been applied. There may also be some functionality provided by OpenGL to handle this sort of problem that I'm just unaware of.
You shouldn't be treating the laser as a spatial child of the character that fires it. Once it has been fired, the laser is an entity of its own, so you should render as follows:
glPushMatrix();
glMultMatrixf(viewMatrix);      // camera/view transform
glPushMatrix();
glMultMatrixf(playerMatrix);    // player's own world transform
DrawPlayer();
glPopMatrix();
glPushMatrix();
glMultMatrixf(laserMatrix);     // laser's own world transform, independent of the player
DrawLaser();
glPopMatrix();
glPopMatrix();
Also, be sure that you don't mix your rendering transformation logic with the game logic. You should always store the world-space position of your objects to be able to test for intersections regardless of your current OpenGL matrix stack.
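As a concrete illustration of keeping the laser in world space, here is a C++ sketch of the math; the names and the yaw/pitch convention are assumptions, but the same idea carries straight over to JOGL:

#include <cmath>

struct Vec3 { float x, y, z; };

// A point 'distance' units along the laser, built from the muzzle position and
// the camera's yaw/pitch -- all in world coordinates, no matrix stack involved.
Vec3 laserPointAt(Vec3 muzzle, float yawDeg, float pitchDeg, float distance)
{
    const float toRad = 3.14159265f / 180.0f;
    float yaw = yawDeg * toRad, pitch = pitchDeg * toRad;
    Vec3 dir = { std::cos(pitch) * std::sin(yaw),
                 std::sin(pitch),
                 -std::cos(pitch) * std::cos(yaw) };   // looks down -Z when yaw = 0
    return { muzzle.x + dir.x * distance,
             muzzle.y + dir.y * distance,
             muzzle.z + dir.z * distance };
}

Stepping this point forward in small increments (or doing a proper ray/box test) against the known world cube positions gives you the collision, and the same world coordinates can then be fed to glTranslate when drawing the laser.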
Remember to be careful with spatial parent/child relationships. In practice, they aren't that frequent. For more information, google the problems of scene graphs.
The point that was being made in the first answer is that you should never depend on the matrix to position the object in the first place. You should be keeping track of the position and rotation of the laser before you even think about drawing it. Then you use the translate and rotate commands to put it where you know it should be.
You're trying to do things backwards, and yes, that does mean you'll have to do the matrix math, and OpenGL doesn't keep track of that, because the ModelView matrix is the ONLY thing OpenGL does keep track of with regard to object positions. OpenGL has no concept of "world space" or "camera space". There is only the matrix that all input is multiplied by. It's elegantly simple... but in some cases I do prefer the way DirectX has a separate view matrix and model matrix.
So, if you don't know where an object is located without matrix math, then I would consider that a fundamental design problem. If you don't need to know the object's position, then matrix-transform to your heart's content, but if you do need its position, start with the position.
(pretty much what the first answer says, just in a different way...)
I'm working with Qt and QWt3D Plotting tools, and extending them to provide some 3-D and 2-D plotting functionality that I need, so I'm learning some OpenGL in the process.
I am currently able to plot points using OpenGL, but only as circles (or "squares" by turning anti-aliasing off). These points act the way I like - i.e. they don't change size as I zoom in, although their x/y/z locations move appropriately as I zoom, pan, etc.
What I'd like to be able to do is plot points using a variety of shapes (^, <, >, *, ., etc.). From what I understand of OpenGL (which isn't very much), this is not trivial to accomplish because OpenGL treats everything as a "real" 3-D object, so zooming in on any OpenGL shape other than a "point" changes the object's projected size.
After doing some reading, I think there are (at least) 2 possible solutions to this problem:
Use OpenGL textures. This doesn't seem too difficult, but I believe that the texture images will get larger and smaller as I zoom in - is that correct?
Use OpenGL polygons, lines, etc. and draw *'s, triangles, or whatever. But here again I run into the same problem - how do I prevent OpenGL from re-sizing the "points" as I zoom?
Is the solution to simply bite the bullet and re-draw the whole data set each time the user zooms or pans to make sure that the points stay the same size? Is there some way to just tell openGL to not re-calculate an object's size?
Sorry if this is in the OpenGL doc somewhere - I could not find it.
What you want is called a "point sprite." OpenGL 1.4 supports these through the ARB_point_sprite extension.
Try this tutorial
http://www.ploksoftware.org/ExNihilo/pages/Tutorialpointsprite.htm
and see if it's what you're looking for.
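A minimal fixed-function sketch of what that extension gives you (markerTexture and the point coordinates are placeholders):

// With point sprites enabled, every GL_POINTS vertex is rasterized as a
// screen-aligned, textured square whose size is set in pixels by glPointSize,
// so it never changes with zoom.
glEnable(GL_POINT_SPRITE_ARB);
glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, markerTexture);   // texture holding the marker shape
glPointSize(16.0f);                            // on-screen size in pixels

glBegin(GL_POINTS);
glVertex3f(x, y, z);                           // one vertex per plot point
glEnd();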
The scene is re-drawn every time the user zooms or pans, anyway, so you might as well re-calculate the size.
You suggested using a textured poly, or using polygons directly, which sound like good ideas to me. It sounds like you want the plot points to remain in the correct position in the graph as the camera moves, but you want to prevent them from changing size when the user zooms. To do this, just resize the plot point polygons so the ratio between the polygon's size and the distance to the camera remains constant. If you've got a lot of plot points, computing the distance to the camera might get expensive because of the square-root involved, but a lookup table would probably solve that.
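A small C++ sketch of that constant-ratio resizing (all names are placeholders):

#include <cmath>

// World-space half-size a marker quad needs so its projected size stays
// roughly constant: it simply grows linearly with the distance to the camera.
float markerHalfSize(float px, float py, float pz,
                     float camX, float camY, float camZ,
                     float sizeAtUnitDistance)
{
    float dx = px - camX, dy = py - camY, dz = pz - camZ;
    float distance = std::sqrt(dx * dx + dy * dy + dz * dz);
    return sizeAtUnitDistance * distance;
}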
In addition to resizing, you'll want to keep the plot points facing the camera, so billboarding is your solution, there.
An alternative is to project each of the 3D plot point locations to find out their 2D screen coordinates. Then simply render the polygons at those screen coordinates. No re-scaling necessary. However, gluProject is quite slow, so I'd be very surprised if this wasn't orders of magnitude slower than simply rescaling the plot point polygons like I first suggested.
Good luck!
There's no easy way to do what you want to do. You'll have to dynamically resize the primitives you're drawing depending on the camera's current zoom. You can use a technique known as billboarding to make sure that your objects always face the camera.
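One cheap way to billboard in the fixed-function pipeline, sketched here with placeholder names: translate to the point, read back the modelview matrix, overwrite its rotation block with identity, and reload it, so the quad drawn next always faces the camera.

float m[16];
glPushMatrix();
glTranslatef(px, py, pz);               // px/py/pz: the plot point's position
glGetFloatv(GL_MODELVIEW_MATRIX, m);    // view * translation, column-major
m[0] = 1; m[1] = 0; m[2]  = 0;          // wipe out the rotation part...
m[4] = 0; m[5] = 1; m[6]  = 0;
m[8] = 0; m[9] = 0; m[10] = 1;
glLoadMatrixf(m);                       // ...so the quad is drawn screen-aligned
drawMarkerQuad();                       // placeholder for the actual marker geometry
glPopMatrix();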