I have spent a lot of time trying to find a solution, but I cannot. I hope you can help me. The code is a bit long, so I am only posting the part where I have the problem. My code captures a bitmap from a window, which is saved in HBitmap. I need to rotate the bitmap, so I start GDI+ and create a bitmap pBitmap from HBitmap:
// INIT GDI+
ULONG_PTR gdiplusToken;
GdiplusStartupInput gdiplusStartupInput;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
if (!gdiplusToken) return 3;
// Gdip_GetRotatedDimensions:
GpBitmap* pBitmap;
int result = Gdiplus::DllExports::GdipCreateBitmapFromHBITMAP(HBitmap, 0, &pBitmap);
Then I calculate the variables needed for the rotation. Then I create a graphics object and try to rotate the image:
GpGraphics * pG;
result = Gdiplus::DllExports::GdipGetImageGraphicsContext(pBitmap, &pG);
Gdiplus::SmoothingMode smooth = SmoothingModeHighQuality;
result = Gdiplus::DllExports::GdipSetSmoothingMode(pG, smooth);
Gdiplus::InterpolationMode interpolation = InterpolationModeNearestNeighbor;
result = Gdiplus::DllExports::GdipSetInterpolationMode(pG, interpolation);
MatrixOrder MatrixOrder_ = MatrixOrderPrepend;
result = Gdiplus::DllExports::GdipTranslateWorldTransform(pG, xTranslation, yTranslation, MatrixOrder_);
MatrixOrder_ = MatrixOrderPrepend;
result = Gdiplus::DllExports::GdipRotateWorldTransform(pG, ROTATION_ANGLE, MatrixOrder_);
GpImageAttributes * ImgAttributes;
result = Gdiplus::DllExports::GdipCreateImageAttributes(&ImgAttributes); // create an ImageAttribute object
result = Gdiplus::DllExports::GdipDrawImageRectRect(pG,pBitmap,0,0,w,h,0,0,w,h,UnitPixel,ImgAttributes,0,0); // Draw the original image onto the new bitmap
result = Gdiplus::DllExports::GdipDisposeImageAttributes(ImgAttributes);
Finally I wanted to check the image so I added:
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
result = Gdiplus::DllExports::GdipCreateBitmapFromGraphics(w, h, pG, &pBitmap);
result = Gdiplus::DllExports::GdipSaveImageToFile(pBitmap, L"justest.png", &pngClsid, NULL); // last parameter is optional: GDIPCONST EncoderParameters* encoderParams
But my image is blank. I found out that GdipCreateBitmapFromGraphics creates a blank image, but how should I finish it so that I can check what I have drawn? Are these steps correct (not just here, but also above, near GdipCreateBitmapFromHBITMAP() and GdipGetImageGraphicsContext()), or do I need to add something? How do I get it working?
PS: I am sure that HBitmap contains a picture of the window; I have already checked that.
To my eyes, you have some things backwards in your approach.
What you need to do is the following:
Read in your Image (src)
Find the minimum bounding rectangle that will contain the rotated image (i.e., rotate the corners; the distances between the minimum and maximum x and y are the new dimensions).
Create a new Image object with these dimensions and the pixel format you want (likely the same as src, but maybe you want an alpha channel) and background color you want (dst)
Create a graphics based on dst (new Graphics(dst))
Set the appropriate transform on the graphics
Draw src onto dst
Export dst
The good news is that to make sure you're doing things right, you can isolate steps out.
For example, you can just make an image and a graphics and draw a line on it with no transform (or better a box with an X) and save that. If you have what you expect, then you're on the right path. Next add a transform to the box. In your case you'll need both a rotation and a translation. Next get the dimensions of the dest image right for that rotation (protip: don't use a square to test). Finally, do it with your actual image.
This will get you step-by-step to the correct output instead of trying to get the whole thing in one shot.
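For reference, here is a minimal sketch of that order using the same flat-API calls as the question. It is only a sketch: newW/newH stand for the rotated bounding-box dimensions from step 2, xTranslation/yTranslation and ROTATION_ANGLE are the values the asker already computes, and GetEncoderClsid is the asker's own helper.
GpBitmap* pRotated = NULL;
// Step 3: create the DESTINATION bitmap with the rotated dimensions (32bpp gives an alpha channel).
Gdiplus::DllExports::GdipCreateBitmapFromScan0(newW, newH, 0, PixelFormat32bppARGB, NULL, &pRotated);
GpGraphics* pG = NULL;
// Step 4: the graphics context must target the NEW bitmap, not the source.
Gdiplus::DllExports::GdipGetImageGraphicsContext(pRotated, &pG);
Gdiplus::DllExports::GdipGraphicsClear(pG, 0xFF000000); // optional background colour (opaque black)
// Step 5: set the transform so the rotated source lands inside the new bitmap.
Gdiplus::DllExports::GdipTranslateWorldTransform(pG, xTranslation, yTranslation, MatrixOrderPrepend);
Gdiplus::DllExports::GdipRotateWorldTransform(pG, ROTATION_ANGLE, MatrixOrderPrepend);
// Step 6: draw the ORIGINAL bitmap (pBitmap from GdipCreateBitmapFromHBITMAP) onto the new one.
Gdiplus::DllExports::GdipDrawImageRectRect(pG, pBitmap, 0, 0, w, h, 0, 0, w, h,
                                           UnitPixel, NULL, NULL, NULL);
Gdiplus::DllExports::GdipDeleteGraphics(pG);
// Step 7: save the DESTINATION bitmap - no GdipCreateBitmapFromGraphics needed.
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
Gdiplus::DllExports::GdipSaveImageToFile(pRotated, L"justest.png", &pngClsid, NULL);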
Related
How to read an image in C++ as a 2D array? I need to create a C/C++ program that reads an image (in any format) as a 2D array so it can show pixel values (0-255), divide the image into blocks, apply different block-based compression methods (BTC, AMBTC, MMBTC), and save the new image by hand, without using ready-made libraries (must not use Magick++).
Thanks in advance.
Here's some 'outline' code using MFC's CImage class that may help you. I've shown how to use the basic Load and Save operations, and how to get a 'raw' array of pixel data (note: it's best to convert to 32-bit format, so you can be sure the DWORD pointer you get really points to a width x height array - other BPP formats can give strange results):
First, load from file (CImage will know or guess the format from the file extension):
CImage original;
original.Load("Yourfile.jpg"); // Use actual file path, obviously
int pw = original.GetWidth(), ph = original.GetHeight(); // Dimensions
CImage working; // Use this to hold our 32-bit image
working.Create(pw, ph, 32);
// Next, copy image from original to working...
HDC hDC = working.GetDC();
original.BitBlt(hDC, 0, 0, SRCCOPY);
working.ReleaseDC();
// Get a DWORD pointer to the pixel data...
BITMAP bmp;
GetObject(working.operator HBITMAP(), sizeof(BITMAP), &bmp);
DWORD* pixbuf = static_cast<DWORD*>(bmp.bmBits);
// We can now access any pixel(x,y) data using: pixbuf[x + y * pw]
You can now do all sorts of work on your image buffer, using the pixbuf array, as stated in the comment. For clarity: each DWORD (32-bit unsigned) in the buffer holds one pixel's RGB data plus an unused 'alpha' byte (set to zero), stored in reversed order; read as a value, each DWORD is 0x00RRGGBB (the bytes in memory are BB GG RR 00).
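For instance, here is a hypothetical in-place edit of that buffer (assuming the 32-bit 'working' image and the pw/ph dimensions from above) that converts the image to grayscale:
// Hypothetical example: convert the 32-bit buffer to grayscale in place.
for (int y = 0; y < ph; ++y)
{
    for (int x = 0; x < pw; ++x)
    {
        DWORD p = pixbuf[x + y * pw];
        BYTE b = (BYTE)(p & 0xFF);          // blue lives in the low byte
        BYTE g = (BYTE)((p >> 8) & 0xFF);   // green
        BYTE r = (BYTE)((p >> 16) & 0xFF);  // red
        BYTE grey = (BYTE)((r * 30 + g * 59 + b * 11) / 100); // rough luminance
        pixbuf[x + y * pw] = (DWORD)grey | ((DWORD)grey << 8) | ((DWORD)grey << 16);
    }
}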
When you're done, you can save your modified image as follows:
CImage savepic;
savepic.Create(pw, ph, 24); // Change 24 to whatever BPP you require
hDC = savepic.GetDC();
working.BitBlt(hDC, 0, 0, SRCCOPY); // Copies modified image to output
savepic.ReleaseDC();
savepic.Save("NewFile.jpg"); // CImage knows what format to use based on the extension
Of course, in a real-world program, there are error checks that you will need to make (most CImage methods return a status indicator, and GetLastError() can be used), and you would probably be safer copying the 'pixbuf' data to a separate memory zone - but, hopefully, this brief outline will help you get started.
Feel free to ask for further clarification and/or explanation.
I've been tearing my hair out over how to do this simple effect. I've got an image (see below), and when this image is used in a game, it produces a clockwise transition to black effect. I have been trying to recreate this effect in SDL(2) but to no avail. I know it's got something to do with masking but I've no idea how to do that in code.
The closest I could get was by using "SDL_SetColorKey" and incrementing the RGB values so it would not draw the "wiping" part of the animation.
Uint32 colorkey = SDL_MapRGBA(blitSurf->format,
0xFF - counter,
0xFF - counter,
0xFF - counter,
0
);
SDL_SetColorKey(blitSurf, SDL_TRUE, colorkey);
// Yes, I'm turning the surface into a texture every frame!
SDL_DestroyTexture(streamTexture);
streamTexture = SDL_CreateTextureFromSurface(RENDERER, blitSurf);
SDL_RenderCopy(RENDERER, streamTexture, NULL, NULL);
I've searched all over and am now just desperate for an answer for my own curiosity- and sanity! I guess this question isn't exactly specific to SDL; I just need to know how to think about this!
I more or less arbitrarily came up with a solution. It's expensive, but it works: iterate through every pixel in the image and map the colour like so:
int tempAlpha = (int)alpha + (speed * 5) - (int)color;
int tempColor = (int)color - speed;
*pixel = SDL_MapRGBA(fmt,
(Uint8)tempColor,
(Uint8)tempColor,
(Uint8)tempColor,
(Uint8)tempAlpha
);
Where alpha is the current alpha of the pixel, speed is the parameterised speed of the animation, and color is the current colour of the pixel. fmt is the SDL_PixelFormat of the image. This is for fading to black; the following is for fading in from black:
if ((255 - counter) > origColor)
continue;
int tempAlpha = alpha - speed*5;
*pixel = SDL_MapRGBA(fmt,
(Uint8)0,
(Uint8)0,
(Uint8)0,
(Uint8)tempAlpha
);
Where origColor is the color of the pixel in the original grayscale image.
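For context, here is a minimal sketch of the per-pixel loop those snippets sit inside. It is an assumption of the surrounding code, not the asker's actual implementation: it presumes a 32-bit blitSurf and the speed variable used above, and it omits clamping to [0, 255], just as the snippets do.
// Sketch of the surrounding loop (32-bit grayscale surface assumed).
SDL_LockSurface(blitSurf);
SDL_PixelFormat* fmt = blitSurf->format;
Uint32* pixels = (Uint32*)blitSurf->pixels;
for (int y = 0; y < blitSurf->h; ++y)
{
    for (int x = 0; x < blitSurf->w; ++x)
    {
        Uint32* pixel = &pixels[y * (blitSurf->pitch / 4) + x];
        Uint8 color, g, b, alpha;
        SDL_GetRGBA(*pixel, fmt, &color, &g, &b, &alpha); // grayscale image, so r == g == b
        int tempAlpha = (int)alpha + (speed * 5) - (int)color;
        int tempColor = (int)color - speed;
        *pixel = SDL_MapRGBA(fmt, (Uint8)tempColor, (Uint8)tempColor, (Uint8)tempColor, (Uint8)tempAlpha);
    }
}
SDL_UnlockSurface(blitSurf);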
I made a quick API to do all of this, so feel free to check it out: https://github.com/Slynchy/SDL-AlphaMaskWipes
I have images of humans where I want to eliminate some pattern. Please, take a look on the three images below:
GrabCut extracted the figure (second picture) from the first image without any problem.
Now I have a rectangle corresponding to the face (circle on the second picture) and I want to use it as "background" (while the rest of the image would be a combination of a foreground and background) to eliminate skin leaving the clothes only.
The approximately desired result is on the third picture:
Is there any way to make GrabCut to do it? I cannot assign the areas/masks manually, all that I have is a rectangle provided by the face detection.
UPD: In the code below I try to do it using a mask, but stage 2 does not seem to work (at the very least, the face should be cut out). Maybe I just do not understand how it works, as I have only modified another example. The code runs:
static Mat my_segment(Mat _inImage, Rect assumption, Rect face){
// human segmentation opencv
// _inImage - input image
// assumption - human rectangle on _inImage
// face - face rectangle on _inImage
// iterations - is being set externally
/*
GrabCut segmentation
*/
Mat bgModel,fgModel; // the models (internally used)
Mat result; // segmentation result
//*********************** step1: GrabCut human figure segmentation
grabCut(_inImage, // input image
result, // segmentation result
assumption,// rectangle containing foreground
bgModel,fgModel, // models
iterations, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// upsample the resulting mask
cv::Mat separated(assumption.size(),CV_8UC3,cv::Scalar(255,255,255));
_inImage(assumption).copyTo(separated,result(assumption));
// (bg pixels not copied)
//return(separated); // return the innerings of assumption rectangle
//*********************** step2:
//cutting the skin with the mask based on the face rectangle
Rect adjusted_face = face;
adjusted_face.x = face.x - assumption.x;
adjusted_face.y = face.y - assumption.y;
//rectangle(separated,
// adjusted_face.tl(),
// adjusted_face.br(),
// cv::Scalar(166,94,91), 2);
//creating mask
Mat mymask(separated.size(),CV_8UC1);
// everything is "probable foreground" by default...
mymask.setTo(Scalar::all(GC_PR_FGD));
// ...and the face area is marked as sure background
mymask(adjusted_face).setTo(Scalar::all(GC_BGD));
// performing grabcut
grabCut(separated,
mymask,
cv::Rect(0,0,assumption.width,assumption.height),
bgModel,fgModel,
1,
cv::GC_INIT_WITH_MASK);
// Just repeating everything from before
// Get the pixels marked as likely foreground
cv::compare(mymask,cv::GC_PR_FGD,mymask,cv::CMP_EQ);
//here was the error
//separated.copyTo(separated,mymask); // bg pixels not copied
cv::Mat res(separated.size(),CV_8UC3,cv::Scalar(255,255,255));
separated.copyTo(res,mymask); // bg pixels not copied
//*************************//
//return(separated); // return the innerings of assumption rectangle
return(res);
}
Okay, I found the mistake. Instead of
separated.copyTo(separated,mymask);
the last lines should be changed to:
cv::Mat res(separated.size(),CV_8UC3,cv::Scalar(255,255,255));
separated.copyTo(res,mymask); // bg pixels not copied
//*************************//
return(res); // return the innerings of assumption rectangle
Also, one needs many more iterations for the second grabCut call, something like 5-7 iterations.
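For concreteness, the adjusted second call (same variables as in my_segment above, only with the iteration count raised as described) would look something like this:
// Second pass: refine with the face marked as sure background, using more iterations.
grabCut(separated,
        mymask,
        cv::Rect(0, 0, assumption.width, assumption.height),
        bgModel, fgModel,
        7, // 5-7 iterations instead of 1
        cv::GC_INIT_WITH_MASK);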
The results are not so good, so I welcome the answers that can improve it.
I am writing a program to display two cameras next to each other. In Qt that is pretty simple with QCamera, but my cameras are turned by 90°, so I have to turn them in the program too.
QCamera has no command to rotate its output, so I want to display the image in a label rather than in a viewfinder. So I take an image, turn it, and display it in a label.
QImage img;
QPixmap img_;
img = ui->viewfinder->grab().toImage();
img_ = QPixmap::fromImage(img);
img_ = img_.transformed(QTransform().rotate((90)%360));
QImage img2;
QPixmap img2_;
img2 = ui->viewfinder->grab().toImage();
img2_ = QPixmap::fromImage(img2);
img2_ = img2_.transformed(QTransform().rotate((90)%360));
ui->label->setPixmap(img_);
ui->label_2->setPixmap(img2_);
When I start the program there are just two black boxes next to each other.
(The part where I declare everything is missing from the code, but the camera works fine in the viewfinder, so I think there is no problem there.)
Try this:
img_ = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1); // takes a snapshot as a QPixmap
or
img = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1).toImage(); // takes a snapshot as a QImage
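Putting that together with the rotation you already have would look roughly like this (a sketch only, reusing ui->viewfinder and ui->label from your code; note that in Qt 5, QScreen::grabWindow() is the preferred replacement for the deprecated QPixmap::grabWindow()):
// Sketch: grab the viewfinder window, rotate the pixmap, show it in the label.
QPixmap snap = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1);
QPixmap rotated = snap.transformed(QTransform().rotate(90));
ui->label->setPixmap(rotated);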
You can use the orientation of the camera to correct the image orientation in the viewfinder, as described in the Qt documentation. Here is the link:
http://doc.qt.io/qt-5/cameraoverview.html
and here is the code found in the documentation:
// Assuming a QImage has been created from the QVideoFrame that needs to be presented
QImage videoFrame;
QCameraInfo cameraInfo(camera); // needed to get the camera sensor position and orientation
// Get the current display orientation
const QScreen *screen = QGuiApplication::primaryScreen();
const int screenAngle = screen->angleBetween(screen->nativeOrientation(), screen->orientation());
int rotation;
if (cameraInfo.position() == QCamera::BackFace) {
rotation = (cameraInfo.orientation() - screenAngle) % 360;
} else {
// Front position, compensate the mirror
rotation = (360 - cameraInfo.orientation() + screenAngle) % 360;
}
// Rotate the frame so it always shows in the correct orientation
videoFrame = videoFrame.transformed(QTransform().rotate(rotation));
I have *.png files and I want to take different 8x8 px parts from those textures and place them on a bitmap (an SDL_Surface, I guess, but maybe not), something like this:
Right now I render without a bitmap, i.e. I call each texture and draw each part directly on the screen every frame, and it's too slow. I guess I need to load each *.png into a separate bitmap, combine them, and then draw just one big bitmap, but maybe I'm wrong. I need the fastest way of doing this, and I need code for it (SDL 2, not SDL 1.3).
Also, maybe I need to use plain OpenGL here?
Update:
Or maybe I need to load the *.png's into int arrays somehow, treat them just like ordinary numbers, place them into one big int array, and then convert that to an SDL_Surface/SDL_Texture? This seems like the best way, but how do I write it?
Update 2:
The colors of the pixels in each block are not actually all the same as shown in the picture, and they can also be transparent. The picture is just an example.
Assuming you already have your bitmaps loaded up as SDL_Texture(s), composing them into a different texture is done via SDL_SetRenderTarget.
SDL_SetRenderTarget(renderer, target_texture);
SDL_RenderCopy(renderer, texture1, ...);
SDL_RenderCopy(renderer, texture2, ...);
...
SDL_SetRenderTarget(renderer, NULL);
Every render operation you perform between setting your render target and resetting it (by calling SDL_SetRenderTarget with a NULL texture parameter) will be rendered to the designated texture. You can then use this texture as you would use any other.
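One detail worth adding: the texture you render into has to be created with the SDL_TEXTUREACCESS_TARGET flag, along these lines (width/height here are placeholders for the size of the composed bitmap):
// The target texture must be created with SDL_TEXTUREACCESS_TARGET.
SDL_Texture* target_texture = SDL_CreateTexture(renderer,
                                                SDL_PIXELFORMAT_RGBA8888,
                                                SDL_TEXTUREACCESS_TARGET,
                                                width, height);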
Ok so, when I asked about "solid colour", I meant - "in that 8x8 pixel area in the .png that you are copying from, do all 64 pixels have the same identical RGB value?" It looks that way in your diagram, so how about this:
How about creating an SDL_Surface, and directly painting 8x8 pixel areas of the memory pointed to by the pixels member of that SDL_Surface with the values read from the original .png.
And then when you're done, convert that surface to an SDL_Texture and render that?
You would avoid all the SDL_UpdateTexture() calls.
Anyway here is some example code. Let's say that you create a class called EightByEight.
class EightByEight
{
public:
EightByEight( SDL_Surface * pDest, Uint8 r, Uint8 g, Uint8 b):
m_pSurface(pDest),
m_red(r),
m_green(g),
m_blue(b){}
void BlitToSurface( int column, int row );
private:
SDL_Surface * m_pSurface;
Uint8 m_red;
Uint8 m_green;
Uint8 m_blue;
};
You construct an object of type EightByEight by passing it a pointer to an SDL_Surface and also some values for red, green and blue. This RGB corresponds to the RGB value taken from the particular 8x8 pixel area of the .png you are currently reading from. You will paint a particular 8x8 pixel area of the SDL_Surface pixels with this RGB value.
So now, when you want to paint an area of the SDL_Surface, you use the function BlitToSurface() and pass in a column and row value. For example, if you divided the SDL_Surface into 8x8 pixel squares, BlitToSurface(3,5) means paint the square at column 3, row 5 (counting from zero) with the RGB value that I set on construction.
The BlitToSurface() looks like this:
void EightByEight::BlitToSurface(int column, int row)
{
Uint32 * pixel = (Uint32*)m_pSurface->pixels + (row * 8) * (m_pSurface->pitch / 4) + (column * 8);
// now pixel is pointing to the first pixel in the correct 8x8 pixel square of the
// Surface's pixel memory (pitch is in bytes, so pitch/4 steps a Uint32 pointer by one row).
// Now you need to paint 8 rows of 8 pixels, but be careful - after each row you need
// to add (m_pSurface->pitch / 4) - 8 to land on the start of the next row of the square
for(int y = 0; y < 8; y++)
{
// paint a row
for(int i = 0; i < 8; i++)
{
*pixel++ = SDL_MapRGB(m_pSurface->format, m_red, m_green, m_blue);
}
// advance pixel pointer by (pitch/4) - 8, to get to the next "row" of the square.
pixel += (m_pSurface->pitch / 4) - 8;
}
}
I'm sure you could probably speed things up further by pre-calculating an RGB value on construction. Or if you're reading a pixel from the texture, you could probably dispense with the SDL_MapRGB() (but it's just there in case the Surface has different pixel format to the .png).
memcpy is probably faster than 8 individual assignments to the RGB value - but I just want to demonstrate the technique. You could experiment.
So, all the EightByEight objects you create, all point to the same SDL_Surface.
And then, when you're done, you just convert that SDL_Surface to an SDL_Texture and blit that.
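A hypothetical usage sketch (the surface size, renderer, and RGB values here are made up for illustration):
// Create a 32-bit surface to compose into (16 x 16 blocks of 8x8 pixels = 128 x 128).
SDL_Surface* sheet = SDL_CreateRGBSurface(0, 128, 128, 32, 0, 0, 0, 0);
// Paint two blocks with different colours.
EightByEight red(sheet, 255, 0, 0);
EightByEight blue(sheet, 0, 0, 255);
red.BlitToSurface(0, 0);  // square at column 0, row 0
blue.BlitToSurface(3, 5); // square at column 3, row 5
// Convert the finished surface to a texture once, and draw it.
SDL_Texture* tex = SDL_CreateTextureFromSurface(renderer, sheet);
SDL_RenderCopy(renderer, tex, NULL, NULL);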
Thanks to everyone who took part, but my friends and I solved it ourselves. So here is an example (the full source code is too big and unnecessary here, so I'll just describe the main idea):
int pitch, *pixels;
SDL_Texture *texture;
...
if (!SDL_LockTexture(texture, 0, (void **)&pixels, &pitch))
{
for (/*Conditions*/)
memcpy(/*Params*/);
SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, 0, 0);
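Filling in the skeleton a little, the idea could look roughly like this. Everything here is an assumption for illustration: atlasW/atlasH, srcSurf (a source surface already converted to the texture's pixel format), and the block coordinates dstCol/dstRow/srcCol/srcRow are made-up names.
// The texture must be created with SDL_TEXTUREACCESS_STREAMING for SDL_LockTexture to work.
SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING, atlasW, atlasH);
void* pixels;
int pitch;
if (SDL_LockTexture(texture, NULL, &pixels, &pitch) == 0)
{
    // Copy one 8x8 block (4 bytes per pixel) from srcSurf into the atlas,
    // from block (srcCol, srcRow) to block (dstCol, dstRow); repeat per block.
    for (int y = 0; y < 8; ++y)
    {
        Uint8* dst = (Uint8*)pixels + (dstRow * 8 + y) * pitch + dstCol * 8 * 4;
        Uint8* src = (Uint8*)srcSurf->pixels + (srcRow * 8 + y) * srcSurf->pitch + srcCol * 8 * 4;
        memcpy(dst, src, 8 * 4);
    }
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, NULL, NULL);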