I have been trying to port a bullet engine over to another game's code for some time now. One of my hiccups is the fact that the character isn't firing the bullet, despite my best efforts. I have a Bullet class and a BulletCache class.
One of my big questions relates to the use of this method:
-(id) initWithBulletImage
{
    // Uses the texture atlas now.
    if ((self = [super initWithSpriteFrameName:@"bullet1big e0000.png"]))
    {
    }
    return self;
}
This method is supposed to use the Zwoptex texture pack. However, my bullet texture pack features 8 bullets facing in 8 different directions, and it is its own texture pack; the game came pre-loaded with other item packs, which all share one texture, but mine doesn't. I have a bullet1big.plist file as well, but I am stumped about where to go from here. Do I instead need to call a CCAnimation method to get the bullet textures (so that it knows which bullet to fire in each direction)?
I am making a basic implementation of Asteroids using SFML in C++, to practice using a component-entity-system framework.
Conceptually, it makes sense for objects like the player ship, floating asteroids etc. to share common 'components', such as a graphics component, a velocity component, and an orientation/positional component. This keeps concerns separate and has a whole range of benefits.
However, in SFML, Sprites are rendered to a fixed position that only they know about! This immediately means that my graphics component and orientation/positional component must be combined or must know about each other, which goes against the whole idea of the component-entity-system approach. In SDL, on the other hand, you can easily render the texture to a separate rectangle constructed from anywhere.
My question is this: There must be some concrete reasoning behind why Sprites in SFML hang onto their own positional information - what is this reasoning? Perhaps if I understood this better, I could form a good solution.
The sf::Sprite class is primarily meant as a quick, easy way to get a sprite on screen without worrying too much about implementation details (as are the other sf::Drawable-derived classes).
It's not necessarily the best fit for more advanced use cases, mostly because sprites are rather slow to draw (they're unbatched).
What you should do instead is implement your own drawable or visual component that stores color, texture, and UV coordinates. Maybe something like this:
struct DrawableComponent {
    sf::Color color;
    sf::Texture *texture;
    sf::IntRect uv;
};
Of course there could be other approaches with more options or various components (e.g. vector graphics vs. textured quads).
Then, when drawing, iterate over all your entities that share the same texture and can be batched, put their vertices into a std::vector or sf::VertexArray, and use that for quick, batched rendering.
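A minimal sketch of that grouping step, using plain stand-in types instead of the real sf::Texture/sf::Vertex so the batching logic is the focus (in real SFML you would hand each buffer to window.draw() with a sf::RenderStates carrying the texture):

```cpp
#include <cassert>
#include <map>
#include <vector>

// Stand-ins for sf::Vertex and a textured entity (names illustrative).
struct Vertex { float x, y, u, v; };

struct Entity {
    const void* texture;        // which texture this entity uses
    std::vector<Vertex> quad;   // 4 vertices for a textured quad
};

// Group entities by texture and concatenate their vertices, so each
// distinct texture results in exactly one draw call.
std::map<const void*, std::vector<Vertex>>
buildBatches(const std::vector<Entity>& entities) {
    std::map<const void*, std::vector<Vertex>> batches;
    for (const Entity& e : entities) {
        std::vector<Vertex>& buf = batches[e.texture];
        buf.insert(buf.end(), e.quad.begin(), e.quad.end());
    }
    return batches;
}
```

With N entities sharing one texture, this turns N unbatched sprite draws into a single vertex-array draw.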
SFML follows an object-oriented design. A sf::Sprite models a visible thing, which has a texture and a transformation. Thus, as usual in OOD, it holds both of these attributes.
This is directly at odds with ECS design, which strives to turn this inside-out by not having entities hold onto anything. You won't really be able to integrate the sf::Sprite class into your design -- it's fundamentally incompatible. The best you can do is create a temporary sf::Sprite at display time, when you have gathered all of the data you need.
As for SDL... Well, unlike SFML, it's just a low-ish-level graphics API (among others). It does not try to model anything: take a texture, slap it on the framebuffer, that's it. Two very different tools for very different goals.
The program freezes when I call glBindTexture(GL_TEXTURE_2D, _ID); in the draw(Sprite) method of the 'Renderer' class. (the actual code is sprite.getTexture()->bind(), but I have added a std::cout in that function before and after the glBindTexture() call and it only prints once).
I am struggling to understand why the program is freezing: when I call glBindTexture, the program doesn't respond and crashes. Rather than filling this page up with a long list of code, here's the link to the GitHub repo: https://github.com/TheInfernalcow/OpenGL-game. The relevant files are mainly src/graphics/renderer.cpp and src/graphics/texture.cpp.
If anyone has the time to read through the code and try and point me in the right direction I would be grateful, have been pondering over this for hours.
Is it SplashState that is having the issue? If so:
SplashState::SplashState(Game* game)
{
_game = game;
Texture2D backgroundTexture("res/darkguy.png", 96, 128);
Sprite _background;
_background.setTexture(backgroundTexture);
}
You are assigning a texture to the locally scoped Sprite, not the one on the SplashState -- so when you try to draw it in your render function, the class level Sprite has no texture.
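A self-contained model of that scoping bug, with stand-in Texture2D/Sprite types (the real classes live in the asker's repo; the names here are just illustrative). The fix is to drop the local declaration and assign to the member:

```cpp
#include <cassert>
#include <string>

// Stand-ins for the project's Texture2D/Sprite classes (illustrative).
struct Texture2D { std::string path; };

struct Sprite {
    const Texture2D* texture = nullptr;
    void setTexture(const Texture2D& t) { texture = &t; }
    bool hasTexture() const { return texture != nullptr; }
};

struct SplashState {
    Sprite _background;            // class-level sprite used later by render()
    Texture2D _backgroundTexture;  // kept as a member so it outlives the ctor

    SplashState() : _backgroundTexture{"res/darkguy.png"} {
        // Bug in the original: 'Sprite _background;' declared a *local*
        // that shadowed the member, so the member was never textured.
        _background.setTexture(_backgroundTexture);  // assign to the member
    }
};
```

Note that backgroundTexture in the original constructor is also a local; if Sprite::setTexture stores a reference rather than a copy, the texture itself also needs to outlive the constructor, hence the member above.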
I developed a game with SDL 2.0 and C++. I seem to be having memory and CPU issues: CPU usage goes up to 40%, and memory usage grows by about 5 MB a second.
So I think the reason is the way I'm handling textures/sprites.
My question is: should I create a different/new sprite/texture for every instance?
For example, I have a class called Enemy which contains all the variables and methods related to enemy monsters, such as HP, damage, location, image (texture), etc. This class contains its own texture to be rendered onto the renderer.
Is this the right way? Or should I create a sprite/texture for all the images beforehand and render them as needed?
I'm also wondering whether this will render two different images onto the renderer:
RenderCopy(renderer, image);
image->SetPosition(X,Y);
RenderCopy(renderer,image);
or is it going to move the sprite to the new position?
I think my issues are caused by overloading the renderer and/or having too many textures loaded.
Let me know what you think.
OpenGL is not a scene graph; it's a drawing API. So every time you allocate a new texture, you create a new data object that consumes memory. You should reuse and share resources wherever possible.
My question is should I create a different/new sprite/texture for every instance?
No, not unless every instance of the sprite uses different textures. Otherwise you should reuse them, i.e. load each texture only once and store a pointer to it in every sprite that uses it.
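A minimal sketch of that reuse pattern: a cache that loads each file once and hands out shared pointers. Texture and loadTextureFromDisk here are stand-ins (with SDL you would store SDL_Texture* and load with IMG_LoadTexture instead):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Stand-ins for the real texture type and loader (illustrative names).
struct Texture { std::string path; };

std::shared_ptr<Texture> loadTextureFromDisk(const std::string& path) {
    return std::make_shared<Texture>(Texture{path});
}

// Loads each file at most once; every later request for the same path
// returns the already-loaded texture instead of allocating a new one.
class TextureCache {
    std::unordered_map<std::string, std::shared_ptr<Texture>> cache_;
public:
    std::shared_ptr<Texture> get(const std::string& path) {
        auto it = cache_.find(path);
        if (it != cache_.end()) return it->second;  // reuse existing texture
        auto tex = loadTextureFromDisk(path);       // load once
        cache_.emplace(path, tex);
        return tex;
    }
};
```

Each Enemy would then hold the shared pointer (or a raw pointer into the cache) rather than loading its own copy, so a hundred enemies share one texture.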
and I'm wondering if this will render two different images onto the renderer:
RenderCopy(renderer, image);
image->SetPosition(X,Y);
RenderCopy(renderer,image);
or is it going to move the sprite to the new position?
This will copy the image twice. First at the old position and then at the new. What you will get is a trailing image. You should probably either clear the previous position or render the background at that position (or whatever should be there).
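The trailing-image effect can be modeled without SDL at all, using a small array as a stand-in framebuffer: drawing at a new position without clearing leaves the old pixels in place, which is exactly what two RenderCopy calls per frame do.

```cpp
#include <array>
#include <cassert>

// A 1-D "framebuffer" model: draw() marks a pixel, clear() wipes them all.
constexpr int kWidth = 8;

void draw(std::array<int, kWidth>& fb, int x) { fb[x] = 1; }
void clear(std::array<int, kWidth>& fb) { fb.fill(0); }

int countMarks(const std::array<int, kWidth>& fb) {
    int n = 0;
    for (int p : fb) n += p;  // how many drawn pixels remain
    return n;
}
```

In a real SDL loop this corresponds to calling SDL_RenderClear at the top of every frame before any SDL_RenderCopy calls.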
I have a cocos2d iOS app with Box2D (and Kobold2D). I have an array of 18 CCSprites in a layer. They are created using spriteWithSpriteFrameName and a texture atlas (thank you, TexturePacker). When I want to update the 18 sprites, I think I can either a) change the image (but I am not sure how to do that; I saw a reference to setDisplayFrame, but I need to get the image from the batch node / texture atlas using spriteWithSpriteFrameName) or b) destroy the sprite I previously created and added to the layer with addChild, and create a new one in its place (18 sprites, 16 times in one "game"). In terms of resource usage and performance, which method is preferred? It seems like a), but again, I'm not sure how to do that.
Thanks
You could add the following code as extension to CCSprite:
-(void) setDisplayFrameNamed:(NSString*)name
{
[self setDisplayFrame:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:name]];
}
If you are using Box2D you could also use GBox2D, which is described in detail in the MonkeyJump tutorial.
The ideal solution is not to switch a sprite's texture at all if you can avoid it. Second best is changing the texture via a sprite frame (note that this rules out using a CCSpriteBatchNode).
Creating new sprites is typically the operation which has the highest negative performance impact.
I want to use OpenGL as the graphics layer of my project, but I really want to do it in good style. How can I declare a member function draw() for each class and call it from within the OpenGL display function?
For example, I want something like this:
class Triangle
{
public:
void draw()
{
glBegin(GL_TRIANGLES);
...
glEnd();
}
};
Well, it also depends on how much time you have and what is required. Your approach is not bad, if a little old-fashioned. Modern OpenGL uses shaders, but I guess that is not covered by your (school?) project. For that purpose, and for starters, your approach should be completely OK.
Besides shaders, if you wanted to progress a little further, you could also go in the direction of using more generic polygon objects, simply storing a list of vertices and combine that with a separate 'Renderer' class that would be capable of rendering polygons, consisting of triangles. The code would look like this:
renderer.draw(triangle);
Of course, a polygon can have some additional attributes like color, texture, transparency, etc. You can have some more specific polygon classes like TriangleStrip, TriangleFan, etc., also. Then all you need to do is to write a generic draw() method in your OpenGL Renderer that will be able to set all the states and push the vertices to rendering pipeline.
When I was working on my PhD, I wrote a simulator which did what you want to do. Just remember that even though your code may look object-oriented, the OpenGL engine still renders things sequentially. Also, the sequential nature of the matrix algebra under the hood in OpenGL is sometimes not in the order you would logically expect (when do I translate, when do I draw, when do I rotate, etc.?).
Remember LOGO back in the old days? It had a turtle, which was a pen, and you moved the turtle around and it drew lines: if the pen was down it drew, if the pen was up it did not. That was my mindset when I worked on this program. I would start the "turtle" at a familiar coordinate (0, 0, 0), use the math to translate it (move it to the center of the object I wanted to draw), then call the draw() methods you are trying to write to draw my shape in the relative coordinate system where the "turtle" is, not from the absolute (0, 0, 0). Then I would move the turtle, draw, and so on. Hope that helps...
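The order-of-operations point above can be seen with plain 2-D transforms, no OpenGL required: translating a point and then rotating it lands somewhere different from rotating first and then translating. This mirrors how glTranslatef/glRotatef compose.

```cpp
#include <cassert>

// A 2-D point and the two basic transforms, kept deliberately simple.
struct Vec2 { double x, y; };

Vec2 rotate90(Vec2 p) { return {-p.y, p.x}; }  // 90 degrees CCW about origin
Vec2 translate(Vec2 p, double dx, double dy) { return {p.x + dx, p.y + dy}; }
```

Starting from the point (1, 0): translate-by-(1, 0)-then-rotate gives (0, 2), while rotate-then-translate gives (1, 1). Same two operations, different results.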
No, it won't work like this. The problem is that the GLUT Display function is exactly one function. So if you wanted to draw a bunch of triangles, you could still only register one of their draw() functions to be the GLUT display function. (Besides, pointers to member functions in C++ are a hard topic as well).
So as suggested above, go for a dedicated Renderer class. This class would know about all drawable objects in your application.
class Renderer {
    std::list<Drawable*> _objects;  // pointers, so derived types aren't sliced
public:
    void drawAllObjects() {
        // iterate over _objects and call each object's draw() method
    }
};
Your GLUT display function would then be a static function that calls drawAllObjects() on a (global or not) renderer object.
Ah, good old immediate-mode OpenGL. :) That routine there looks fine.
I would probably make the 'draw' method virtual, though, and inherit from a 'Drawable' base type that specifies the methods such classes have.
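A minimal sketch of that Drawable base type combined with the Renderer suggested above (class names are illustrative; the GL calls inside draw() are elided):

```cpp
#include <cassert>
#include <list>
#include <memory>

// Interface for anything the renderer can draw.
struct Drawable {
    virtual ~Drawable() = default;
    virtual void draw() const = 0;  // issue the OpenGL calls here
};

struct Triangle : Drawable {
    void draw() const override {
        // glBegin(GL_TRIANGLES); ... glEnd();  (real GL calls elided)
    }
};

// Owns all drawables and draws them each frame via virtual dispatch.
class Renderer {
    std::list<std::unique_ptr<Drawable>> _objects;
public:
    void add(std::unique_ptr<Drawable> obj) {
        _objects.push_back(std::move(obj));
    }
    void drawAllObjects() const {
        for (const auto& obj : _objects) obj->draw();
    }
};
```

The GLUT display function then only needs to call drawAllObjects() on one Renderer instance, regardless of how many shape classes you add later.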