Preserve DrawingArea 'image' on draw signal - C++

I am trying to make a simple square where you can paint with the mouse. The problem is that whenever the draw signal fires, the cairo surface seems to be cleared entirely. I can tell because after the first queue_draw() the white background is gone and I see my GTK theme color (which is grey).
I thought I could save the surface or the context, but you can't just create an empty surface in cairo, and I can't create one with this->get_window()->create_cairo_surface() (where this is an object of a class inherited from Gtk::DrawingArea), because when the constructor is called the widget isn't attached to any window yet, so get_window() returns a null pointer. I mean, I could create some public function called you_are_added_to_window_create_cairo_surface(), but I'd really rather not do that.
So I really don't know what to do, or what it is that I don't understand about cairo.
How do I preserve, or save, the current state of the 'canvas', so that whatever is actually being drawn is just applied on top of the existing drawing?
Here is the callback function of my class:
bool MyDrawingArea::on_draw(const Cairo::RefPtr<Cairo::Context> & cr) {
    /* clear and fill background with white in the beginning */
    if (first_draw) {
        cr->save();
        cr->set_source_rgb(1.0, 1.0, 1.0); // cairo color components range from 0.0 to 1.0
        cr->paint();
        cr->restore();
        first_draw = false;
    }
    cr->save();
    cr->set_source_rgb(0.0, 0.0, 0.0);
    cr->begin_new_path();
    while (!dots_queue.empty()) {
        auto dot = dots_queue.front();
        cr->line_to(dot.first, dot.second);
        dots_queue.pop();
    }
    cr->close_path();
    cr->stroke();
    cr->restore();
    return false;
}

Remove first_draw, and instead of calling dots_queue.pop(), just iterate over dots_queue and redraw all of the dots each time.
The draw function is not meant for "I want to add some drawing". Instead, it is "hey, the windowing system has no idea what should be drawn here, please fill this with content". That's why the cairo surface is cleared.
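A minimal sketch of that approach, assuming the dots are kept in a member container that survives between draws (here a std::vector named dots, my own name; the original code used a queue):
bool MyDrawingArea::on_draw(const Cairo::RefPtr<Cairo::Context>& cr)
{
    // The surface handed to on_draw is blank, so repaint the background every time.
    cr->set_source_rgb(1.0, 1.0, 1.0);
    cr->paint();
    // Then redraw everything from the stored dots, without emptying the container.
    cr->set_source_rgb(0.0, 0.0, 0.0);
    for (const auto& dot : dots)        // iterate, don't pop
        cr->line_to(dot.first, dot.second);
    cr->stroke();
    return false;
}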

So while storing all drawing actions works, it's really not enough if you want your program to save your drawings; you will have to use a second surface to store everything on.
My solution combines both of Uli Schlachter's answers.
First, I have a structure in which I store the last drawing action, from the last button press until the button release. This allows me to show things such as lines in real time, while keeping the canvas clean of them.
Second, I store everything drawn on the canvas on a surface, which is created like this:
// this is an object of the class derived from DrawingArea
auto allocation = this->get_allocation();
this->surface = Cairo::ImageSurface::create(
    Cairo::Format::FORMAT_ARGB32,
    allocation.get_width(),
    allocation.get_height()
);
Then, on each draw signal, I restore it like this:
cr->save();
cr->set_source(surface, 0.0, 0.0);
cr->paint();
cr->restore();
Whenever I want to save the surface, i.e. apply the drawing onto the canvas, I do the following:
Cairo::RefPtr<Cairo::Context> t_context = Cairo::Context::create(surface);
t_context->set_source(cr->get_target(), -allocation.get_x(), -allocation.get_y());
t_context->paint();
Here is the important detail: without adjusting for the allocation coordinates, your canvas is going to slide away on each surface save and restore.
With that, I can easily keep my drawings on the canvas, load the canvas from a file (because I am using an ImageSurface), or save it to a file.
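For the file part, a hedged sketch of what that could look like using cairomm's PNG helpers (the member surface is the one created above; the method names are placeholders of mine):
void MyDrawingArea::save_canvas(const std::string& filename)
{
    surface->write_to_png(filename);       // dump the off-screen canvas to disk
}
void MyDrawingArea::load_canvas(const std::string& filename)
{
    surface = Cairo::ImageSurface::create_from_png(filename);
    queue_draw();                          // repaint from the freshly loaded surface
}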

3D object won't update in for loop

I am trying to rotate a 3D object, but it doesn't update when I apply transforms in a for loop.
The object jumps to the last position.
How does one update a 3D object's position in a sequence of updates if it won't update in a for loop?
Just calling glTranslate, glRotate or such won't change things on the screen. Why? Because OpenGL is a plain drawing API, not a scene graph. All it knows about are points, lines and triangles that it draws to a pixel framebuffer. That's it. If you want to change something on the screen, you must redraw it, i.e. clear the picture and draw it again, with the changes.
BTW: you should not use a dedicated loop to implement animations (neither for, nor while, nor do-while). Instead, perform the animation in the idle handler and issue a redraw event.
I reckon you have a wrong understanding of what OpenGL does for you.
I'll try to outline it:
- Send vertex data to the GPU (once)
  (this only specifies the (standard) shape of the object)
- Create matrices to rotate, translate or transform the object (per update)
- Send the matrices to the shader (per update)
  (the shader then calculates the screen position from the original vertex position and the transformation matrix)
- Tell OpenGL to draw the bound vertices (per update)
Imagine programming with OpenGL as being like a web client: only specifying the request (changing the matrix and binding stuff) is not enough, you need to explicitly send the request (send the transformation data and tell OpenGL to draw) to receive the answer (objects on the screen).
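To make the per-update steps concrete, here is a hedged sketch of the draw call, assuming the one-time setup (compiled shader program, VAO with uploaded vertices) has already happened; the names prog, vao, vertexCount and the uniform u_model are placeholders, and GLM is used for the matrix:
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// Called once per update: send this frame's transformation and issue the draw call.
void drawObject(GLuint prog, GLuint vao, GLsizei vertexCount, const glm::mat4& model)
{
    glUseProgram(prog);
    GLint loc = glGetUniformLocation(prog, "u_model");           // assumed uniform name
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(model)); // send the matrix (per update)
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);                  // tell OpenGL to draw (per update)
}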
It is possible to draw an animation from a loop.
for ( ... ) {
    edit_transformation();
    draw();
    glFlush();    // maybe glutSwapBuffers() if you use GLUT
    usleep(100);  // not standard C, bad
}
You draw, you flush/swap to make sure that what you just drew is sent to the screen, and you sleep.
However, it is not recommended to do this in an interactive application. The main reason is that while you are in this loop, nothing else can run. Your application will be unresponsive.
That's why window systems are event-based. Every few milliseconds, the window system pings your app so you can update your state, for example do animation. This is the idle function. When the state of your program has changed, you tell the window system that you would like to draw again. It is then up to the window system to call your display function. You do your OpenGL calls when the system tells you to.
If you use GLUT for communicating with the window system, this looks like the code below. Other libraries like GLFW have equivalent functions.
#include <GL/glut.h>
void update();
void display();
int main() {
    ...                       // Create window, set everything up.
    glutIdleFunc(update);     // Register idle function
    glutDisplayFunc(display); // Register display function
    glutMainLoop();           // The window system is in charge from here on.
}
void update() {
    edit_transformation();    // Update your models
    glutPostRedisplay();      // Tell the window system that something changed.
}
void display() {
    draw();                   // Your OpenGL code here.
    glFlush();                // or glutSwapBuffers();
}

Qt GraphicsItem transformation affects all items in scene

I have this embedded Qt application that uses the QGraphics framework to display a web view.
The dimensions of the web view are 1280*720 pixels, and the QGraphicsView is set to render the scene at these coordinates (0,0, 1280x720).
I'm trying to add a loading indicator on the top right corner (at 1100,50), which is a simple PNG image that I rotate every now and then using a QTimeLine.
Code looks like this (I found the transformation trick on the internet):
// loading_indic initialization:
QGraphicsPixmapItem *loading_indic =
    new QGraphicsPixmapItem( QPixmap("./resources/loading_64.png") );
loading_indic->setPos(QPointF(1100.0, 50.0));
QTimeLine *timeline = new QTimeLine(1000);
timeline->setFrameRange(0, steps);
connect(timeline, SIGNAL(valueChanged(qreal)), this, SLOT(updateStep(qreal)));
timeline->start();
// called at each step of the QTimeLine:
void updateStep(qreal step) {
    QTransform transformation = QTransform()
        // move the coordinate system to the center of the image
        .translate( width/2.0, height/2.0)
        // rotate the image in this new coordinate system
        .rotate(new_angle)
        // move the coordinate system back to the original position
        .translate( -width/2.0, -height/2.0);
    loading_indic->setTransform(transformation);
}
Now, my problem is that when doing this, it looks like the WebView is translated as well, resulting in everything being displayed in the center of the screen.
The webview is supposed to fill the screen, and the loading indicator should be in the top right...
My scene contains only two items:
Scene
|
\____ QGraphicsWebView
\____ QGraphicsPixmapItem // loading indicator
What am I doing wrong here?
Solved my problem.
I don't know why, but it looks like adding this PNG item to the scene was screwing up the scene's rectangle.
Doing this:
_scene.addItem(loading_indic);
loading_indic->setPos(1100.0, 50.0);
_scene.setSceneRect(0.0,0.0,1280.0,720.0); // resets the scene's rectangle ?!
loading_indic->startAnimation();
solved the problem. Now my items are correctly placed on screen.
If somebody has an explanation for this, I'll gladly accept it as an answer.
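For what it's worth, the likely explanation is that when no scene rect has been set, QGraphicsScene::sceneRect() keeps growing to the bounding rectangle of every item added so far, and the view centers on that rect, so adding (and transforming) the pixmap item changed it. Pinning the rect on the view instead of the scene works as well; a minimal sketch with assumed setup code and variable names:
// Somewhere in the view setup, after the QApplication exists:
QGraphicsScene *scene = new QGraphicsScene(this);
QGraphicsView *view = new QGraphicsView(scene, this);
view->setSceneRect(0.0, 0.0, 1280.0, 720.0); // the view always shows exactly this area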

screen size in cocos2d

I would like to change the screen size so that sprites will disappear before they reach the real screen edges,
BUT I still want my background to cover the whole screen.
Imagine a paper on my screen: I want the game to exist only on that paper, and around that paper there will still be some background.
So, how do I set my CCSprites to move in and out of that paper and slowly disappear when they come to its edges?
My sprites are moved with (I need to include some code to get this published, because of the site's "standards"):
id moveclouds1 = [CCMoveTo actionWithDuration:30 position:ccp(420,380)];
Thanks.
You can use glScissor for that.
Simply subclass CCLayer to make your "paper screen", then add your sprites inside this layer.
On this layer, override the visit method:
- (void) visit
{
    glPushMatrix();
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, width, height); // here put the position and the size of the paper/screen
    [super visit];
    glDisable(GL_SCISSOR_TEST);
    glPopMatrix();
}
A sprite reaching the border of the paper/screen will be scissored off.
REMEMBER: glScissor uses PIXEL values, not points, so it's your job to double the values for retina displays (CC_CONTENT_SCALE_FACTOR() can come in handy).
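A small hedged helper for that, assuming the rectangle is given in points (the function name is mine; plain C-style code that also compiles in an Objective-C++ cocos2d file):
// Convert a point-based rectangle to pixels before handing it to glScissor.
static void scissorInPoints(float x, float y, float w, float h)
{
    const float s = CC_CONTENT_SCALE_FACTOR(); // 2.0 on retina, 1.0 otherwise
    glScissor((GLint)(x * s), (GLint)(y * s), (GLsizei)(w * s), (GLsizei)(h * s));
}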

Display lists won't run

I am trying to create a tool that draws a shape in OpenGL and then lets the user modify the values of the properties of that shape in a Windows Form. So if my shape is a rectangle, I will create a form that allows the user to control the size, color, etc. of the rectangle. I have written the OpenGL code in managed C++ and the form in C#, and as some of these shapes got more complicated I decided to use display lists for them (for both performance and predictability reasons).
I define the display list in the constructor of the shape and call the display list in the render method.
My issue is that my display lists won't run at all. The parts that I render outside of a display list are rendered, but the parts inside the display list are not.
Here's some sample code of my process:
//c# side
GLRectangle rect;
public CSharpRectangle() {
    rect = new GLRectangle();
}
//managed c++ side
public GLRectangle() {
    width = 50;
    height = 50;
    //initialize more values
    rectDL = glGenLists(1);
    glNewList(rectDL, GL_COMPILE);
    renderRect();
    glEndList();
}
public void render() {
    //Draw border
    glBegin(GL_LINE_LOOP);
    glVertex2f(0, 0);
    glVertex2f(width, 0);
    glVertex2f(width, height);
    glVertex2f(0, height);
    glEnd();
    //Draw interior
    glCallList(rectDL);
}
private void renderRect() {
    glRectf(0, 0, width, height);
}
In this example, the border of the rectangle is rendered, but the rectangle itself isn't... if I replace the display-list call with a plain method call, the rectangle is rendered fine. Does anyone know why this might be happening?
I want to give my 2 cents.
The code in your question seems correct to me, so there is probably something else in your application that makes your display list not runnable.
The only thing I can think of is that there is no current context when the display list is compiled (that is, when the GLRectangle constructor executes). So, is that routine executed on the same thread that made the rendering context current (wglMakeCurrent on Windows), and is it called after the context was made current?
Further, check with glGetError after each OpenGL routine in order to validate the operation. If it returns an error, you can find out what's wrong in your code.
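A minimal error-check helper of the kind meant here (the name and output format are mine):
#include <windows.h>
#include <GL/gl.h>
#include <cstdio>
// Drop a call to this after each OpenGL call while debugging,
// e.g. glNewList(rectDL, GL_COMPILE); checkGlError("glNewList");
static void checkGlError(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        std::printf("OpenGL error 0x%04X after %s\n", err, where);
}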
The reason you may not get what you want may simply be that display lists aren't there anymore. Back when I was reading the OpenGL Red Book, I noticed that display lists were deprecated in OpenGL 3.0 and removed from the core profile in 3.1, and googling confirmed it. I don't remember the exact reason anymore, but I believe it was because they overlapped with VAOs and VBOs. So if you are using an OpenGL 3.1 or newer core profile context, you won't get display lists anymore.

How to save to a bitmap in MFC C++ application?

I am just starting with MFC so please be tolerant ;).
I have written (it was mostly generated, to be honest) a simple application which should do the Paint chores: drawing lines, rectangles and ellipses, changing the color of the object to be drawn, etc.
I need to save what has been drawn on the screen into a .bmp file. Any ideas how I can achieve this?
I don't know if it's relevant, but I am drawing objects on the screen without the use of any CBitmaps or anything like that. Here is the part of the code responsible for drawing:
CPaintDoc* pDoc = GetDocument();
ASSERT_VALID(pDoc);
Anchor.x = point.x;
Anchor.y = point.y;
OldPoint.x = Anchor.x;
OldPoint.y = Anchor.y;
if (pDoc->shapeCount >= MAX_SHAPES) return;
pDoc->shapeCount++;
if (bFreehand)
{
    pDoc->m_shape[pDoc->shapeCount-1] = new Shape;
    pDoc->m_shape[pDoc->shapeCount-1]->shape = ePoint;
}
if (bLine)
{
    pDoc->m_shape[pDoc->shapeCount-1] = new CLine;
    pDoc->m_shape[pDoc->shapeCount-1]->shape = eLine;
}
if (bRectangle)
{
    pDoc->m_shape[pDoc->shapeCount-1] = new CRectangle;
    pDoc->m_shape[pDoc->shapeCount-1]->shape = eRectangle;
}
if (bEllipse)
{
    pDoc->m_shape[pDoc->shapeCount-1] = new CEllipse;
    pDoc->m_shape[pDoc->shapeCount-1]->shape = eEllipse;
}
pDoc->m_shape[pDoc->shapeCount-1]->x = point.x;
pDoc->m_shape[pDoc->shapeCount-1]->y = point.y;
pDoc->m_shape[pDoc->shapeCount-1]->x2 = point.x;
pDoc->m_shape[pDoc->shapeCount-1]->y2 = point.y;
pDoc->m_shape[pDoc->shapeCount-1]->Pen = CurrentPen;
pDoc->m_shape[pDoc->shapeCount-1]->Brush = CurrentBrush;
bButtonDown = true;
SetCapture();
I have found this way to do it, but I don't know how to obtain the screen width and height to fill in the CreateBitmap parameter list:
CBitmap bitmap;
bitmap.CreateBitmap(desktopW, desktopH, 1, 32, rgbData);
CImage image;
image.Attach((HBITMAP)bitmap.Detach()); // hand ownership of the HBITMAP to CImage
image.Save(_T("C:\\test.bmp"), Gdiplus::ImageFormatBMP);
The CreateBitmap call only requires the desktop width and height if the image you wish to save is actually the entire size of the screen. If that is indeed your intent, you can use CWnd::GetDesktopWindow() to get a CWnd object that you can query for its width and height:
http://msdn.microsoft.com/en-us/library/bkxb36k8(v=VS.80).aspx
That gets dodgy in general... if for no other reason than multi-monitor scenarios... so I'd recommend against it unless you really feel like writing a screen-capture app.
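For completeness, a hedged sketch of that query (plain MFC; the variable names are mine):
// Ask the desktop window for its size (see the multi-monitor caveat above).
CWnd* pDesktop = CWnd::GetDesktopWindow();
CRect rc;
pDesktop->GetWindowRect(&rc);
int desktopW = rc.Width();
int desktopH = rc.Height();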
What you probably want to do isn't to take a full screen shot, but just to save the contents of your program's window. Typically you'd do this by breaking out the drawing logic of your program so that the paint method calls a helper function written to take a CDC device context. Then you can either call that function with the window DC you get in the paint call, or with a DC created for the bitmap in order to do your save. Note that you can use a CBitmap with CDC::SelectObject:
http://msdn.microsoft.com/en-us/library/432f18e2(v=VS.71).aspx
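A hedged sketch of that pattern, assuming the drawing code has been factored into a hypothetical DrawShapes(CDC*) helper that the paint handler also calls (the class name is a placeholder too; CImage comes from <atlimage.h>):
void CMyPaintView::SaveToBitmap(LPCTSTR path, int width, int height)
{
    CClientDC screenDC(this);                    // DC of this window, used as a reference
    CDC memDC;
    memDC.CreateCompatibleDC(&screenDC);         // in-memory DC to draw into
    CBitmap bmp;
    bmp.CreateCompatibleBitmap(&screenDC, width, height);
    CBitmap* pOldBmp = memDC.SelectObject(&bmp); // the bitmap is now the draw target
    memDC.FillSolidRect(0, 0, width, height, RGB(255, 255, 255));
    DrawShapes(&memDC);                          // same helper the paint handler uses
    memDC.SelectObject(pOldBmp);
    CImage image;
    image.Attach((HBITMAP)bmp.Detach());         // hand the HBITMAP over to CImage
    image.Save(path, Gdiplus::ImageFormatBMP);
}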
(Though let me pitch you on not using MFC. Try Qt instead. Way better.)