Override CCSprite draw function in cocos2d-x - C++

I have been through all the custom draw questions related to cocos2d-x on Stack Overflow and elsewhere. I would just like to know whether it's possible to override the draw function in CCSprite, not in CCLayer. I want to draw a rounded rectangle. I am used to controlling what is drawn on screen using sprite batches in libgdx and XNA (R.I.P. :( ), and I am a bit lost as to how I can do a custom draw for my sprite objects. Below is what I have, and it does not work.
// object.h
private:
std::string _colourLabel;
int width;
int height;
cocos2d::Vec2 position;
public:
PLOPPP(std::string colour, cocos2d::Vec2 pos, int width, int height);
virtual void draw(cocos2d::Renderer *renderer, const cocos2d::kmMat4 &transform, bool transformUpdated);
void onDraw(const Mat4 &transform, uint32_t flags);
//object.cpp
void PLOPPP::draw(cocos2d::Renderer* renderer, const cocos2d::kmMat4& transform, bool transformUpdated)
{
cocos2d::ccDrawColor4F(1.0f, 0.0f, 0.0f, 1.0f);
cocos2d::ccDrawLine(ccp(0, 0), ccp(100, 100));
}
void PLOPPP::onDraw(const Mat4 &transform, uint32_t flags)
{
cocos2d::ccDrawColor4F(1.0f, 0.0f, 0.0f, 1.0f);
cocos2d::ccDrawLine(ccp(0, 0), ccp(100, 100));
}
I have also put breakpoints in both methods, and neither is ever hit. I also tried using the sprite->visit() method to direct the render component into the onDraw method, as recommended in other questions. Cocos2d-x has poor documentation, so I rely on their test projects and read the header files. I see that there is a draw method to override, but it's never hit. I am not very experienced in C++, but I have a good grip on programming concepts and writing games in general, so you can use any terms you would use with other programmers.
The reason I want to use CCSprite is to get the benefit of actions. I also tried using CCNode as a base class, but the method is still never hit. I don't want to make every object a layer, because that would go completely against the concept of having game objects drawn on a layer. If anyone can suggest any way to draw custom shapes as sprites, so that they are also affected by the actions applied to them, it would be greatly appreciated.
If you think I could benefit from other posts, please point me to them. This should not be a difficult process, but it is taking up a lot of my time. If I could just draw a line, I would be able to fly through the rest of the challenges. I just want to draw a custom something.
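For what it's worth, one likely culprit is the signature: in cocos2d-x 3.x the virtual function is draw(Renderer*, const Mat4&, uint32_t), so a version taking kmMat4 and bool declares a brand-new function instead of overriding the base one, and it is never called. Marking the declaration with the override keyword makes the compiler catch such mismatches. Below is a rough sketch of the usual 3.x custom-draw pattern, assuming a cocos2d::CustomCommand _customCommand member and a cocos2d-x 3.x build (DrawPrimitives was later deprecated in favour of DrawNode); it is not a drop-in fix for every version.

void PLOPPP::draw(cocos2d::Renderer* renderer, const cocos2d::Mat4& transform, uint32_t flags)
{
    // Queue a custom command; the renderer calls onDraw when it processes the queue.
    _customCommand.init(_globalZOrder);
    _customCommand.func = CC_CALLBACK_0(PLOPPP::onDraw, this, transform, flags);
    renderer->addCommand(&_customCommand);
}

void PLOPPP::onDraw(const cocos2d::Mat4& transform, uint32_t flags)
{
    auto director = cocos2d::Director::getInstance();
    director->pushMatrix(cocos2d::MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW);
    director->loadMatrix(cocos2d::MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW, transform);

    cocos2d::DrawPrimitives::setDrawColor4F(1.0f, 0.0f, 0.0f, 1.0f);
    cocos2d::DrawPrimitives::drawLine(cocos2d::Vec2(0, 0), cocos2d::Vec2(100, 100));

    director->popMatrix(cocos2d::MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW);
}

Because PLOPPP ultimately derives from Node/Sprite, the node's transform is passed into onDraw, so actions such as MoveTo or RotateBy applied to the node affect the custom drawing as well.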

Related

Scrolling in SDL2, i.e., changing the integral coordinates of the GUI's layout

I'm trying to simulate 'scrolling' in an SDL2 application, but I don't think moving every individual object on the screen each time a scroll event occurs is an efficient or elegant way of doing it. What I know of SDL2 is that the top left of the screen is at coordinate 0,0. To make this much easier to implement, is it possible to change the top-left starting point of the GUI so that when I scroll it moves to, say, 0,100, and on the next scroll to 0,200, etc.? How could I do this? Thanks
Rather than changing the x,y position of the object itself, or changing the reference co-ordinate of SDL (which cannot be done), you can instead create offset variables.
For example, create an SDL_Point called ViewPointOffset:
SDL_Point ViewPointOffset;
The best practice is to put this in your window class (if you have one), or even better, a Camera class that is a member of the window class.
Then, when you're drawing, just subtract the offset from the x and y co-ordinates that you're drawing:
void draw(SDL_Renderer* renderer, const SDL_Point ViewPointOffset, SDL_Texture* tex, const SDL_Rect* srcrect, const SDL_Rect* dstrect){
SDL_Rect drawrect;
drawrect.w = dstrect->w;
drawrect.h = dstrect->h;
drawrect.x = dstrect->x - ViewPointOffset.x;
drawrect.y = dstrect->y - ViewPointOffset.y;
SDL_RenderCopy(renderer, tex, srcrect, &drawrect);
}
You can either create a second function, or add a boolean parameter to that function, to let you ignore the offset; for example, for a GUI button that you don't want the offset to apply to.
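As a minimal sketch of that variant (the applyOffset parameter name is just for illustration, and the same ViewPointOffset as above is assumed):

void draw(SDL_Renderer* renderer, const SDL_Point ViewPointOffset, SDL_Texture* tex,
          const SDL_Rect* srcrect, const SDL_Rect* dstrect, bool applyOffset)
{
    SDL_Rect drawrect = *dstrect;      // copy position and size
    if (applyOffset) {                 // skip the offset for fixed GUI elements
        drawrect.x -= ViewPointOffset.x;
        drawrect.y -= ViewPointOffset.y;
    }
    SDL_RenderCopy(renderer, tex, srcrect, &drawrect);
}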
https://github.com/Helliaca/SDL2-Game is a small open source game using a similar method. You can find this code in base.cpp/.h

Is it feasible to convert a QOpenGLWidget subclass into one that uses Metal instead of OpenGL?

Background: I've got a Qt/C++ application that currently runs on (and is deployed on) MacOS/X, Windows, and Linux. In one of the application's windows is a view of several dozen audio meters that needs to update frequently (i.e. at 20Hz or faster), so I implemented that view using a QOpenGLWidget and some simple OpenGL routines (see example code below).
This all works fine, however Apple has recently deprecated OpenGL and wants all developers to convert their applications over to Apple's own "Metal" API instead; with the implication that eventually any program that uses OpenGL will stop working on MacOS/X.
I don't mind doing a little #ifdef-magic inside my code to support a separate API for MacOS/X, if I must, however it's not clear if coding to Metal is something that can actually be done in Qt currently. If Metal-inside-Qt is possible, what is the proper approach to use? If not, should I wait for a future release of Qt with better Metal support (e.g. Qt 5.12?) rather than waste my time trying to make Metal work in my current Qt version (5.11.2)?
// OpenGL meters view implementation (simplified for readability)
class GLMetersCanvas : public QOpenGLWidget
{
public:
GLMetersCanvas( [...] );
virtual void initializeGL()
{
glDisable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glDisable(GL_COLOR_MATERIAL);
glDisable(GL_LIGHTING);
glClearColor(0, 0, 0, 0);
}
virtual void resizeGL(int w, int h)
{
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, w, 0, h, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
virtual void paintGL()
{
const float meterWidth = [...];
const float top = [...];
glClear(GL_COLOR_BUFFER_BIT);
glBegin(GL_QUADS);
float x = 0.0f;
for (int i=0; i<numMeters; i++)
{
const float y = _meterHeight[i];
glColor3f(_meterColorRed[i], _meterColorGreen[i], _meterColorBlue[i]);
glVertex2f(x, top);
glVertex2f(x+meterWidth, top);
glVertex2f(x+meterWidth, y);
glVertex2f(x, y);
x += meterWidth;
}
glEnd();
}
};
Yes, it is possible to do what you want. It probably won't be a straightforward transition, because the code you posted uses very old, deprecated features of OpenGL. Also, you might be better off just using CoreGraphics for the simple drawing you're doing. (It looks like a number of solid-colored quads are being drawn. That's very easy and fairly efficient in CoreGraphics.) Metal seems like overkill for this job. That said, here are some ideas.
Metal is an inherently Objective-C API, so you will need to wrap the Metal code in some sort of wrapper. There are a number of ways you could write such a wrapper. You could make an Objective-C class that does your drawing and call it from your C++/Qt class. (You'll need to put your Qt class into a .mm file so the compiler treats it as Objective-C++ to call Objective-C code.) Or you could make your Qt class be an abstract class that has an implementation pointer to the class that does the real work. On Windows and Linux it could point to an object that does OpenGL drawing. On macOS it would point to your Objective-C++ class that uses Metal for drawing.
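As a rough illustration of that second idea (the names below are hypothetical, not an existing Qt or Metal API): the widget keeps a pointer to an abstract backend, and each platform supplies its own implementation behind that interface.

// metersbackend.h -- illustrative only
#include <vector>

class MetersBackend {
public:
    virtual ~MetersBackend() {}
    virtual void resize(int w, int h) = 0;
    // One height and one RGB colour triple per meter, matching the loop above.
    virtual void drawMeters(const std::vector<float>& heights,
                            const std::vector<float>& rgb) = 0;
};

// Implemented once per platform: the macOS version in a .mm file (Objective-C++)
// that drives Metal, the Windows/Linux version wrapping the existing OpenGL code.
MetersBackend* createMetersBackend();

The Qt-facing widget then only calls the interface, so the platform-specific code stays confined to the factory function and the macOS-specific source file.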
This example of mixing OpenGL and Metal might be informative for understanding how the two are similar and where they differ. Rather than having a context where you set state and make draw calls as in OpenGL, in Metal you create a command buffer with the drawing commands and then submit it to be drawn. As with more modern OpenGL programming, where you have vertex arrays and apply a vertex and fragment shader to every piece of geometry, in Metal you also submit vertices and use vertex and fragment shaders for drawing.
To be honest, though, that sounds like a lot of work. (But it is certainly possible to do.) If you did it in CoreGraphics it would look something like this:
virtual void paintCG()
{
const float meterWidth = [...];
const float top = [...];
CGRect backgroundRect = CGRectMake(...);
CGContextClearRect(ctx, backgroundRect);
float x = 0.0f;
for (int i=0; i<numMeters; i++)
{
const float y = _meterHeight[i];
CGContextSetRGBFillColor(ctx, _meterColorRed[i], _meterColorGreen[i], _meterColorBlue[i]);
CGRect meterRect = CGRectMake(x, y, meterWidth, _meterHeight[i]);
CGContextFillRect(ctx, meterRect);
x += meterWidth;
}
}
It just requires that you have a CGContextRef, which I believe you can get from whatever window you're drawing into. If the window is an NSWindow, then you can call:
NSGraphicsContext* nsContext = [window graphicsContext];
CGContextRef ctx = nsContext.CGContext;
This seems easier to write and maintain than using Metal in this case.

SDL2 Object Oriented Wrapper; 1000+ Virtual Function calls a frame; Is there any way to optimize this?

I'm writing an object-oriented wrapper for SDL2. I have decided to make two classes: a Sprite class and a Rectangle class. I used a bit of polymorphism so I don't have to overload the draw function for every drawable object. I have a base class Drawable which has a pure virtual function draw.
Now Sprite and Rect inherit from Drawable and define the draw function to suit each class. When I have to draw many things to the screen, I take a pointer to a Drawable and call its draw method. This is happening 1000+ times a second.
If I look at my CPU usage, it's about 10%. I know that 1000+ sprites being drawn 60 times a second is going to cost some CPU, but I didn't think it would cost this much.
Now, my question: Is there any way to optimize this? Maybe take out the pure virtual function and just overload the functions?
My Code (I tried to shorten it as much as possible):
Sprite::Draw Declaration
void Draw() override;
Sprite::Draw Function
void Sprite::Draw() {
// rect is SDL_Rect
rect.x = rect.x - window->getCamera().getCoords().x;
rect.y = rect.y - window->getCamera().getCoords().y;
SDL_RenderCopyEx(window->renderer, texture, NULL, &rect, 0, NULL, flip);
}
The Function that calls Sprite::Draw
void Window::Draw(Drawable *d) {
d->Draw();
}
Drawing Loop
// 1024 grass Sprites
for (int i = 0; i < 1024; i++) {
mainWindow.Draw(&grass[i]); // Calls Window::Draw
}
As I said earlier, it is eating up about 10% of my CPU usage. I have an AMD 6300, and a NVIDIA GTX 750Ti. I am using Microsoft Visual Studio Express 2013.
The executable name is OBA.exe
Can the textures for these grass sprites be condensed into one texture map, so that you draw just a portion of it (the third param, srcrect: https://wiki.libsdl.org/SDL_RenderCopyEx)?
I'm guessing you don't have 1024 unique grass textures, but a handful of repeated ones. Combine them all into one image and load that as a grass texture.
Your sprite class then needs to just define a SDL_Rect for which part of the texture it uses to draw.
Sample texture:
Say your sprite uses texture 0; its srcrect would be 0,0,32,32. Drawing it just adds one parameter to your RenderCopy call.
SDL_RenderCopyEx(window->renderer, texture, &srcrect, &rect, 0, NULL, flip);
This should improve performance for drawing a lot of sprites. Additionally, you could only draw sprites if they are in the camera view.
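A rough sketch of both ideas combined, going by the Sprite code above (the getViewRect() accessor and the srcrect member are assumptions, not part of the original classes):

void Sprite::Draw() {
    SDL_Rect view = window->getCamera().getViewRect();  // hypothetical camera rect
    SDL_Rect dst = rect;                                 // copy; don't modify the stored rect
    dst.x -= view.x;
    dst.y -= view.y;
    // Cull sprites that are entirely outside the camera view.
    if (dst.x + dst.w < 0 || dst.y + dst.h < 0 || dst.x > view.w || dst.y > view.h)
        return;
    // srcrect selects this sprite's tile inside the shared texture map.
    SDL_RenderCopyEx(window->renderer, texture, &srcrect, &dst, 0, NULL, flip);
}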
I fixed it: I just drew the sprites that were currently in the camera's view. The reason SDL_LogCritical was being called so much is that on every SDL_RenderCopyEx call, SDL_LogCritical is called, along with other SDL functions. By drawing just what the user can see, I got the CPU usage down to about 0.1%.

Drawing with OpenGL without killing the CPU and without parallelizing

I'm writing a simple but useful OpenGL program for my work, which consists of showing what a vector field looks like. So the program simply takes the data from a file and draws arrows. I need to draw a few thousand arrows. I'm using Qt for Windows and the OpenGL API.
The arrow unit is a cylinder and a cone, combined together in the function Arrow().
for(long i = 0; i < modifiedArrows.size(); i++) {
glColor4d(modifiedArrows[i].color.redF(),modifiedArrows[i].color.greenF(),
modifiedArrows[i].color.blueF(),modifiedArrows[i].opacity);
openGLobj->Arrow(modifiedArrows[i].fromX,modifiedArrows[i].fromY,
modifiedArrows[i].fromZ,modifiedArrows[i].toX,
modifiedArrows[i].toY,modifiedArrows[i].toZ,
simulationSettings->vectorsThickness);
}
Now the problem is that running an infinite loop to keep drawing this keeps the CPU fully busy, which is not so nice. I have tried as much as possible to remove all calculations from the paintGL() function, and only simple ones remain. I end the paintGL() function with glFlush() and glFinish(), and yet the main CPU core is always fully loaded.
If I remove this loop, the CPU doesn't get too busy anymore. But I have to draw thousands of arrows anyway.
Is there any solution to this other than parallelizing?
You didn't point out how you have implemented your openGLobj->Arrow method, but if you are using 100% CPU time on this, you are probably painting the arrows in immediate mode. That is really CPU intensive, because you have to transfer data from the CPU to the GPU for every single instruction between glBegin() and glEnd(). If you are using GLUT to draw your data, that's really inefficient too.
The way to go here is to use GPU memory and processing power to display your data. Phyatt has already pointed you in some directions, but I will try to be more specific: use a Vertex Buffer Object (VBO).
The idea is to pre-allocate the memory needed to display your data on the GPU and only update this chunk of memory when needed. This will probably make a huge difference in the efficiency of your code, because the video card driver will handle the CPU->GPU transfers efficiently.
To illustrate the concept, I will show you some pseudo-code at the end of the answer, but it's by no means completely correct. I didn't test it and didn't have time to implement the drawing for you, but it's a concept that can clarify things for you.
class Form
{
public:
Form()
{
// generate a new VBO and get the associated ID
glGenBuffers(1, &vboId);
}
//Called from the child constructor: the base constructor can't call the child's
//generateVertices() override (that would be a pure virtual call).
void uploadVertices()
{
// bind VBO in order to use
glBindBuffer(GL_ARRAY_BUFFER, vboId);
//Populate the buffer vertices.
generateVertices();
// upload data to VBO (size is in bytes)
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(GLfloat), vertices.data(), GL_STATIC_DRAW);
}
virtual ~Form()
{
// vertices is a std::vector, so it cleans itself up; the data was copied into the VBO anyway
// delete the VBO when the object is destroyed
glDeleteBuffers(1, &vboId);
}
//Implementing as virtual, because if you reimplement it on the child class, it will call the child method :)
//Generally you will not need to reimplement this class
virtual void draw()
{
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
//I am drawing the form as triangles, maybe you want to do it in your own way. Do it as you need! :)
//Look! I am not using glBegin() and glEnd(), I am letting the video card driver handle the CPU->GPU
//transfer in a single instruction!
glDrawArrays(GL_TRIANGLES, 0, vertices.size() / 3);
glDisableClientState(GL_VERTEX_ARRAY);
// bind with 0, so, switch back to normal pointer operation
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
protected:
//Populate the vertices vector with the form vertices.
//Remember, any geometric form in OpenGL is rendered as primitives (points, quads, triangles, etc).
//The common way of rendering this is to use multiple triangles.
//You can draw it using glBegin() and glEnd() just to debug. After that, instead of rendering the triangles, just put
//the generated vertices inside the vertices buffer.
//Consider that it's at origin. You can use push's and pop's to apply transformations to the form.
//Each form (Cone or Cilinder) will have its own way of drawing.
virtual void generateVertices() = 0;
GLuint vboId;
std::vector<GLfloat> vertices;
};
class Cone : public Form
{
public:
//The base class destructor runs automatically, so no destructor is needed here.
Cone() : Form() { uploadVertices(); }
private:
void generateVertices() override
{
//Populate the vertices with the cone's formula. Good exercise :)
//Reference: http://mathworld.wolfram.com/Cone.html
}
//vboId and vertices are inherited from Form, so they are not re-declared here
//(re-declaring them would shadow the ones that Form::draw() actually uses).
};
class Cilinder : public Form
{
public:
Cilinder() : Form() { uploadVertices(); }
private:
void generateVertices() override
{
//Populate the vertices with the cylinder's formula. Good exercise :)
//Reference: http://math.about.com/od/formulas/ss/surfaceareavol_3.htm
}
};
class Visualizer : public QOpenGLWidget
{
public:
//Reimplement the paint function to draw one arrow per data element using the classes above.
void paintGL()
{
for(size_t i = 0; i < data.size(); i++)
{
//I really don't have a clue how you position your arrows around your world model.
//Keep in mind that glPushMatrix, glMultMatrixf and glPopMatrix belong to the deprecated
//fixed-function pipeline. I recommend reading
//http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-3:-3D-transformation-and-projection.html if you want to implement this in the most efficient way.
glPushMatrix();
glMultMatrixf(data[i].transform());
cilinder.draw();
cone.draw();
glPopMatrix();
}
}
private:
Cone cone;
Cilinder cilinder;
std::vector<Data> data;
};
As a final note, I can't assure you that this is the most efficient way of doing things. Probably, if you have a HUGE amount of data, you would need some data structure like an octree or a scene graph to optimize your code.
I recommend taking a look at OpenSceneGraph or the Visualization Toolkit to see whether those methods are already implemented for you, which would save you a lot of time.
Try this link for some ideas:
What are some best practices for OpenGL coding (esp. w.r.t. object orientation)?
Basically, what I've seen people do to increase their FPS at the cost of some quality includes the following:
Using DisplayLists. (cache complex or repetitive matrix stacks).
Using Vertex Arrays.
Using simpler geometry with fewer faces.
Using simpler lighting.
Using simpler textures.
The main advantage of OpenGL is that it works with a lot of graphics cards, which are built to do 4x4 matrix transformations, multiplications, etc. very quickly, and which provide extra memory for storing rendered or partially rendered objects.
Assuming that all the vectors change so much and so often that you can't cache any of the renderings...
My approach to this problem would be to simplify the drawing down to just lines and points, and get that to draw at the desired frame rate. (A line for your cylinder and a colored point on the end for the direction.)
After that draws fast enough, try making the drawing more complex, like a rectangular prism instead of a line, and a pyramid instead of a colored point.
Rounded objects typically require a lot more surfaces and calculations.
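As a minimal sketch of that lines-and-points starting point, using a client-side vertex array (the lineVertices layout below is an assumption, not taken from the question's code):

// Two endpoints per arrow, packed as x,y,z triples, filled from the data file.
std::vector<GLfloat> lineVertices;

void drawArrowsAsLines()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, lineVertices.data());
    // One call draws the whole field instead of thousands of glBegin()/glEnd() pairs.
    glDrawArrays(GL_LINES, 0, lineVertices.size() / 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}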
I am not an expert on the subject, but I would google other OpenGL tutorials that deal with optimization.
Hope that helps.
EDIT: Removed references to NeHe tutorials because of comments.

A good OO way to do this (C++)

I need some advice. I have two classes for a game that I am making, and those classes are:
Graphics
Sprite
The sprite class consists of an image buffer for the image, an (x, y) offset, and a width and a height. The graphics class has a buffer for the screen. The screen buffer can have things blitted to it, such as the sprite's image buffer.
Are there any recommended ways of blitting the sprite's image buffer to the graphics screen buffer? I had a couple of ideas:
Have a method like this (in the sprite class):
void Sprite::blit(SDL_Surface* screen)
{
// code goes here
}
or this (in the graphics class):
void Graphics::blit(Sprite sprite)
{
// code
}
or even this (also in the graphics class):
void Graphics::blit(SDL_Surface* aSpritesImageBuffer)
{
// code
}
There are problems with all of these, though. In both classes, I use encapsulation to hide both the sprite's image buffer and the graphics component's screen buffer. They are returned as const so no one can manipulate them without using the functions provided by the class. This is how I did it:
class Graphics
{
public:
const SDL_Surface* getScreenBuffer() const { return screenBuffer; }
private:
SDL_Surface* screenBuffer;
};
^ same with the sprite's image buffer.
So if I tried (in my main class):
void handleRendering()
{
graphics.blit(sprite.getImageBuffer());
}
would that not be very good?
Or even:
void handleRendering()
{
graphics.blit(sprite);
}
And I don't think this is good:
void handleRendering()
{
sprite.blit(graphics.getScreenBuffer());
}
Are there any better methods of doing this without getting errors like const to non-const? (I get an error like that.)
I don't know if your sprite class is only a low-level rendering element (so, basically only a wrapper around SDL_Surface*), or if it's already the actual representation of a creature in your game. In the latter case, as an alternative to your different solutions, you could keep only an id of the bitmap in the sprite class, among other properties like coordinates, size, speed... and put the code that actually depends on the rendering technology in a separate set of classes, like "Graphics" or "BitmapCollection".
So on one side, you would have a "clean" sprite class with simple properties like position, size, speed... and on the other side, the "dirty stuff", with low level SDL_nnn objects and calls. And one day, that id would not represent a bitmap, but for example a 3D model.
That would give something like this:
void handleRendering()
{
graphics.blit(bitmapCollection.Get(sprite.getImageId()));
}
I don't know if the image of a sprite really has to be private or read-only. Several sprites could share the same image; other classes like "SpecialEffects" could modify the sprite bitmaps, swap them, make semi-transparent trailing ghosts on screen, and so on.
A common way to do this would be to have a container in your Graphics object that holds all of the sprites in the scene. Your main loop would call Graphics::blit(), and the Graphics object would iterate through the container of sprites and call the Sprite::blit(SDL_Surface* screen) function on each one passing in its screen buffer.
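A rough sketch of that arrangement, assuming SDL_Surface* buffers as in the question (the member and function names here are illustrative):

#include <vector>
#include <SDL.h>

class Sprite
{
public:
    void blit(SDL_Surface* screen)
    {
        SDL_Rect dst = { x, y, width, height };
        SDL_BlitSurface(imageBuffer, NULL, screen, &dst);
    }
private:
    SDL_Surface* imageBuffer;
    int x, y, width, height;
};

class Graphics
{
public:
    void add(Sprite* sprite) { sprites.push_back(sprite); }
    void blit()
    {
        // Graphics owns the screen buffer, so it hands a non-const pointer
        // to each sprite and the const-to-non-const problem never comes up.
        for (Sprite* sprite : sprites)
            sprite->blit(screenBuffer);
    }
private:
    SDL_Surface* screenBuffer;
    std::vector<Sprite*> sprites;
};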