Drawing on top of QImage using QPainter - C++

I'm essentially trying to include an odometer in my application. For that, I want to display a PNG image of an odometer and then draw a line to represent the needle. I have used a QImage on a QLabel to display the image, but I can't seem to draw anything on it using QPainter. The following is my code:
QImage img(200,200,QImage::Format_ARGB32);
img.load("C:/meter.png");
QPainter paint(&img);
paint.begin(&img);
paint.setPen(Qt::blue);
paint.drawLine(0,0,500,1080);
paint.end();
ui->cont->setPixmap(QPixmap::fromImage(img));
ui->cont->show();
I am afraid it might have something to do with Qt's coordinate system. I know that the origin is the top left of the screen and that 1 pixel represents 1 increment in the coordinates. Is there something I can do to get the location of a point on screen and shift the coordinates to it?
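The coordinates you pass to QPainter when painting on a QImage are in the image's own coordinate system, with the origin at the image's top-left corner rather than the top-left of the screen, so no screen-position lookup is needed. As a rough illustration (not a definitive fix), this is the usual pattern; note that QPainter already calls begin() when it is constructed with a paint device, so the extra begin() call is redundant, and the size/format arguments of the QImage constructor are discarded by load() anyway. The needle endpoint below is made up for the example:
QImage img("C:/meter.png");                 // load the dial image
if (!img.isNull()) {
    QPainter paint(&img);                   // begin() is implicit here
    paint.setPen(QPen(Qt::blue, 3));
    QPoint center(img.width() / 2, img.height() / 2);
    paint.drawLine(center, center + QPoint(60, -60));  // needle, in image coordinates
    paint.end();
    ui->cont->setPixmap(QPixmap::fromImage(img));
    ui->cont->show();
}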

Related

VTK - how to flip/mirror image

I'm using vtkResliceImageViewer to display an image (multi-planar reconstruction). How can I flip/mirror that image vertically and horizontally? Operating on the camera is not working as expected, since the flip also has to take the camera rotation angle into account, so it gets very complicated. It would be great if there were a way to change the image's texture coordinates. Is this possible?
// Create an image
vtkSmartPointer<vtkImageMandelbrotSource> source =
    vtkSmartPointer<vtkImageMandelbrotSource>::New();
source->Update();
// Flip the image
vtkSmartPointer<vtkImageFlip> flipYFilter =
    vtkSmartPointer<vtkImageFlip>::New();
flipYFilter->SetFilteredAxis(1); // flip the y axis
flipYFilter->SetInputConnection(source->GetOutputPort());
flipYFilter->Update();
// Create the viewer
vtkSmartPointer<vtkResliceImageViewer> viewer =
    vtkSmartPointer<vtkResliceImageViewer>::New();
viewer->SetInputData(flipYFilter->GetOutput());
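For completeness, a hedged follow-up sketch of wiring the viewer to an interactor and rendering (the interactor is something you create yourself; it is not part of the snippet above):
vtkSmartPointer<vtkRenderWindowInteractor> iren =
    vtkSmartPointer<vtkRenderWindowInteractor>::New();
viewer->SetupInteractor(iren);
viewer->Render();
iren->Start();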

How can I move the screen using SDL2, C++?

I have a window with a width of 260 pixels. By using the DrawSurface function I can put an image at a position which is not visible on the screen, for example (500, 10). Now I want to move the screen (by pressing a button) to the point where the image is. Is it possible?
I'm not sure how accurate or up-to-date this article is but it gives a lot of starting code for implementing a makeshift camera using an SDL_Rect variable. In your case, you would modify the x and y variables of the camera object and use the apply_surface() method to show textures relative to the camera's position.
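As a rough sketch of that idea (apply_surface, camera, image and screen are names assumed in the spirit of that tutorial, not SDL2 API calls, and the offsets are made up):
// The camera rectangle describes which part of the "world" is currently visible.
SDL_Rect camera = { 0, 0, 260, 200 };

void apply_surface(int x, int y, SDL_Surface* src, SDL_Surface* dst)
{
    SDL_Rect offset;
    offset.x = x;
    offset.y = y;
    SDL_BlitSurface(src, NULL, dst, &offset);
}

// Draw the image at its world position minus the camera position:
apply_surface(500 - camera.x, 10 - camera.y, image, screen);

// On a key press, shift the camera so the image comes into view:
camera.x += 260;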

Create mask to select the black area

I have a black area around my image and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried converting the image to grayscale and then using a threshold to convert it to binary, but that affects my image, since the result contains black pixels from inside the image as well.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance,
I would solve the problem like this:
1. Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0).
2. Use cv::findContours to find the white segments.
3. Remove segments that don't touch the image borders.
4. Use cv::drawContours to draw the remaining segments to a mask.
There is probably a solution with better runtime efficiency, but you should be able to prototype this approach quite quickly.
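A rough, unoptimised sketch of those four steps (the function name and threshold details are my own choices, e.g. cv::THRESH_BINARY_INV with a threshold of 0 so that only value-0 border pixels become white):
#include <opencv2/opencv.hpp>

cv::Mat makeBorderMask(const cv::Mat& gray)
{
    // 1. Inverse-binarize: black border pixels (value 0) become white (255).
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV);

    // 2. Find the white segments.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // 3. + 4. Keep only segments that touch the image border and draw them
    //         filled into the mask.
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::Rect r = cv::boundingRect(contours[i]);
        bool touchesBorder = r.x == 0 || r.y == 0 ||
                             r.x + r.width  == gray.cols ||
                             r.y + r.height == gray.rows;
        if (touchesBorder)
            cv::drawContours(mask, contours, static_cast<int>(i),
                             cv::Scalar(255), cv::FILLED);
    }
    return mask;
}
For the cropping question, the bounding rectangle of the inner (non-border) region can then be used with cv::Mat::operator()(cv::Rect) to cut it out.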

Zoom image inside preview

I am writing a GUI application that works on Mac and Windows, and there is one little problem which I do not know how to solve.
In my application I have a small (250 x 250 px) preview window (let's call it SW) in which the image is placed. The image can be much bigger than SW. Somewhere I have a slider which implements a zoom function for the image inside SW. My main problem is implementing the zoom function on this image.
On enter I have:
the source image and its width and height;
the view image - a zoomed copy of the source image;
the position of the zoomed image;
the size of the viewport, which is 250 x 250 px.
It should work like the zoom in image-processing programs: when we change the zoom value, the image becomes smaller or bigger relative to the viewport center and the image's position inside it. We can also move the image inside the viewport.
To implement this correctly, we need to calculate the image's size and position inside the view. I have already written some code that modifies the image size.
It looks like this:
float one = (source_original_size - thumbnail_size) / 100;
int bigger_side_size = qRound((100-value) * one) + thumbnail_size;
But I cannot figure out how to calculate the position of that zoomed image on the scene.
Can anybody help me with ideas?
If it matters, I am using the Qt framework with QGraphicsView, QGraphicsScene and QGraphicsPixmapItem.
Take a look at the Image Viewer Example; it has some of the features you are looking for.
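If it helps, here is a small sketch of just the position math, under the assumption that the zoom should keep the image point under the viewport centre fixed and that scene coordinates map one-to-one onto the 250 x 250 viewport (all variable names are mine):
// scale_old / scale_new: zoom factor before and after moving the slider
// (x_old, y_old): current top-left of the pixmap item
const float cx = 250.0f / 2.0f;          // viewport centre
const float cy = 250.0f / 2.0f;
const float ratio = scale_new / scale_old;

// Keep the image point that currently sits under the viewport centre in place:
float x_new = cx - (cx - x_old) * ratio;
float y_new = cy - (cy - y_old) * ratio;

pixmap_item->setScale(scale_new);        // QGraphicsPixmapItem scales about its top-left by default
pixmap_item->setPos(x_new, y_new);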

transparent colour being shown some of the time

I am using a LPDIRECT3DTEXTURE9 to hold my image.
This is the function used to display my picture.
int drawcharacter(SPRITE& person, LPDIRECT3DTEXTURE9& image)
{
    position.x = (float)person.x;
    position.y = (float)person.y;

    sprite_handler->Draw(
        image,
        &srcRect,
        NULL,
        &position,
        D3DCOLOR_XRGB(255, 255, 255));

    return 0;
}
According to the book I have, the RGB colour passed as the last parameter will not be displayed on screen; this is how you create transparency.
This works for the most part but leaves a pink line around my image and at the edge of the picture. After trial and error I have found that if I go back into Photoshop I can eliminate the pink box by drawing over it with the pink colour. This can be seen with the ships on the left.
I am starting to think that Photoshop is blending the edges of the image so that the background is not all the same shade of pink, though I have no proof.
Can anyone help me fix this in code, or is the error in the image?
If anyone is good at Photoshop, can they tell me how to fix the image? I use PNG mostly but am willing to change if necessary.
Edit: texture creation code, as requested:
character_image = LoadTexture("character.bmp", D3DCOLOR_XRGB(255,0,255));
if (character_image == NULL)
return 0;
You are loading a BMP image, which does not support transparency natively - the last parameter D3DCOLOR_XRGB(255,0,255) is being used to add transparency to an image which doesn't have any. The problem is that the colour must match exactly: if it is off by even one, it will not be converted to transparent and you will see the near-magenta showing through.
Save your images as 24-bit PNG with transparency, and if you load them correctly there will be no problems. Also don't add the magenta background before you save them.
As you already use PNG, you can just store the alpha value there directly from Photoshop. PNG supports transparency out of the box, and it can give better appearance than what you get with transparent colour.
It's described in http://www.toymaker.info/Games/html/textures.html (for example).
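As a hedged sketch (not the original LoadTexture helper), this is how a PNG that already carries an alpha channel could be loaded with D3DX so that its own alpha is used instead of a magenta colour key; d3ddev stands in for your LPDIRECT3DDEVICE9:
LPDIRECT3DTEXTURE9 character_image = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    d3ddev,
    "character.png",
    D3DX_DEFAULT_NONPOW2,   // take the width from the file
    D3DX_DEFAULT_NONPOW2,   // take the height from the file
    1,                      // mip levels
    0,                      // usage
    D3DFMT_A8R8G8B8,        // keep the alpha channel
    D3DPOOL_MANAGED,
    D3DX_DEFAULT,           // filter
    D3DX_DEFAULT,           // mip filter
    0,                      // 0 = no colour key
    NULL, NULL,
    &character_image);
With alpha blending enabled on the sprite (sprite_handler->Begin(D3DXSPRITE_ALPHABLEND)), drawing with D3DCOLOR_ARGB(255, 255, 255, 255) then leaves the texture's own alpha, including the partially transparent anti-aliased edges, intact.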
Photoshop is anti-aliasing the edge of the image. If it determines that 30% of a pixel is inside the image and 70% is outside, it sets the alpha value for that pixel to 70%. This gives a much smoother result than using a pixel-based transparency mask. You seem to be throwing these alpha values away, is that right? The pink presumably comes from the way that Photoshop displays partially transparent pixels.