Capturing a camera (USB) via the D2XX library or OpenCV - C++

I want to write an application (in C++) to capture images from a camera that is used in an acquisition system. The camera is connected to a box (the acquisition system), and I've found that the chip used inside the box, between the camera and the PC, is an FTDI chip. A USB cable connects the box to the PC. Some other tools are connected to the box, but they are not important here.
Moreover, there is a simple commercial application, written with MFC, and I want to do exactly the same thing. In the application's folder there are D2XX driver files (ftd2xx.h, etc.) and an information file (*.inf) for the camera.
Also, the camera is not recording video but taking photos at short intervals (<0.1 s), and the interval is determined by the acquisition system, not by the commercial application (the acquisition system detects when the camera has to take a photo).
Here is my question:
Since the information file of the USB device is provided, could I just use the OpenCV library to capture from the camera, or do I have to use the D2XX library only?
If I have to use the D2XX library to read the data, how could I convert the raw data to an image format (in Qt)?
I can't simply write the application and test it on the device over and over to find the solution, because the device is located far from my location and every test requires travelling that distance. So I want to make sure that my application will work.
A company from China made the device for us and they won't support it any more :(

The camera uses a custom communications protocol; it doesn't implement a standard imaging device class. OpenCV won't see it, and neither will any other multimedia library. No matter what, you need to implement that protocol yourself. You can then expose the frames to OpenCV if you wish.
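As a rough, hedged illustration only (not the actual protocol): once you have worked out what the box sends, the D2XX side is usually just FT_Open/FT_Read calls, and wrapping the received bytes in a cv::Mat or QImage is straightforward. The frame size and the 8-bit grayscale pixel format below are placeholder assumptions; the real values depend on the camera's protocol.
#include <vector>
#include <opencv2/core.hpp>
#include <QImage>
#include "ftd2xx.h"

// Hypothetical frame geometry; the real values depend on the camera's protocol.
const int FRAME_WIDTH  = 640;
const int FRAME_HEIGHT = 480;

bool readFrame(FT_HANDLE handle, std::vector<unsigned char>& buffer)
{
    buffer.resize(FRAME_WIDTH * FRAME_HEIGHT);            // assumes 8-bit grayscale pixels
    DWORD bytesRead = 0;
    FT_STATUS status = FT_Read(handle, buffer.data(),
                               static_cast<DWORD>(buffer.size()), &bytesRead);
    return status == FT_OK && bytesRead == buffer.size();
}

int main()
{
    FT_HANDLE handle;
    if (FT_Open(0, &handle) != FT_OK)                     // first FTDI device on the bus
        return 1;
    FT_SetTimeouts(handle, 1000, 1000);                   // 1 s read/write timeouts

    std::vector<unsigned char> buffer;
    if (readFrame(handle, buffer))
    {
        // Wrap the raw bytes without copying (the Mat/QImage must not outlive 'buffer').
        cv::Mat frame(FRAME_HEIGHT, FRAME_WIDTH, CV_8UC1, buffer.data());
        QImage image(buffer.data(), FRAME_WIDTH, FRAME_HEIGHT,
                     FRAME_WIDTH, QImage::Format_Grayscale8);
        // ... display or save frame/image ...
    }
    FT_Close(handle);
    return 0;
}
Note that QImage::Format_Grayscale8 requires Qt 5.5 or later; on older Qt versions an indexed 8-bit format with a grayscale color table would be needed.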

To convert a window's contents into a cv::Mat, try this:
Mat hwnd2mat(HWND hwnd){
    HDC hwindowDC, hwindowCompatibleDC;
    int height, width, srcheight, srcwidth;
    HBITMAP hbwindow;   // <-- The image represented by hBitmap
    cv::Mat src;        // <-- The image represented by mat
    BITMAPINFOHEADER bi;

    // Initialize DCs
    hwindowDC = GetDC(hwnd);                              // Get DC of the target capture..
    hwindowCompatibleDC = CreateCompatibleDC(hwindowDC);  // Create compatible DC
    SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);

    RECT windowsize;    // get the height and width of the screen
    GetClientRect(hwnd, &windowsize);

    srcheight = windowsize.bottom;
    srcwidth = windowsize.right;
    height = windowsize.bottom * 2 / 2;  // change this to whatever size you want to resize to
    width = windowsize.right * 2 / 2;

    src.create(height, width, CV_8UC4);

    // create a bitmap
    hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
    bi.biSize = sizeof(BITMAPINFOHEADER);
    bi.biWidth = width;
    bi.biHeight = -height;  // this is the line that makes it draw upside down or not
    bi.biPlanes = 1;
    bi.biBitCount = 32;
    bi.biCompression = BI_RGB;
    bi.biSizeImage = 0;
    bi.biXPelsPerMeter = 0;
    bi.biYPelsPerMeter = 0;
    bi.biClrUsed = 0;
    bi.biClrImportant = 0;

    // use the previously created device context with the bitmap
    SelectObject(hwindowCompatibleDC, hbwindow);
    // copy from the window device context to the bitmap device context
    StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0, srcwidth, srcheight, SRCCOPY);  // change SRCCOPY to NOTSRCCOPY for wacky colors!
    GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO *)&bi, DIB_RGB_COLORS);     // copy from hwindowCompatibleDC to hbwindow

    // avoid memory leak
    DeleteObject(hbwindow);
    DeleteDC(hwindowCompatibleDC);
    ReleaseDC(hwnd, hwindowDC);
    return src;
}
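For example, a minimal usage sketch that grabs the desktop once and displays it (assuming the usual OpenCV highgui headers are included):
    HWND hDesktop = GetDesktopWindow();   // or FindWindow(...) for a specific window
    cv::Mat frame = hwnd2mat(hDesktop);
    cv::imshow("capture", frame);
    cv::waitKey(0);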

Related

Video recording of a window with OpenCV C++

I am trying to write a program for recording windows, but for some reason, after the program finishes, I get a corrupted .avi file.
I don't understand what the problem is. The hwnd2mat() and windowNames() functions work correctly; the error is clearly not in them. The code looks massive, but in fact most of it is occupied by the translation of the image from the HWND to the Mat. It should also be noted that the resulting file after recording always has the same size (irrespective of the recording time).
#include <iostream>
#include <vector>
#include <string>
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <opencv2/videoio.hpp>
#include <Windows.h>

BOOL CALLBACK windowNames(HWND hwnd, LPARAM lParam) {
    const DWORD TITLE_SIZE = 1024;
    WCHAR windowTitle[TITLE_SIZE];
    GetWindowTextW(hwnd, windowTitle, TITLE_SIZE);
    int length = ::GetWindowTextLength(hwnd);
    std::wstring title(&windowTitle[0]);
    if (!IsWindowVisible(hwnd) || length == 0 || title == L"Program Manager") {
        return TRUE;
    }
    // Retrieve the pointer passed into this callback, and re-'type' it.
    // The only way for a C API to pass arbitrary data is by means of a void*.
    std::vector<std::wstring>& titles = *reinterpret_cast<std::vector<std::wstring>*>(lParam);
    titles.push_back(title);
    return TRUE;
}

cv::Mat hwnd2mat(HWND hwnd)
{
    HDC hwindowDC, hwindowCompatibleDC;
    int height, width, srcheight, srcwidth;
    HBITMAP hbwindow;
    cv::Mat src;
    BITMAPINFOHEADER bi;
    HBITMAP bi2;

    hwindowDC = GetDC(hwnd);
    hwindowCompatibleDC = CreateCompatibleDC(hwindowDC);
    SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);

    RECT windowsize;    // get the height and width of the screen
    GetClientRect(hwnd, &windowsize);

    srcheight = windowsize.bottom;
    srcwidth = windowsize.right;
    height = windowsize.bottom / 1;  // change this to whatever size you want to resize to
    width = windowsize.right / 1;

    // create a bitmap
    hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
    bi.biSize = sizeof(BITMAPINFOHEADER);  // http://msdn.microsoft.com/en-us/library/windows/window/dd183402%28v=vs.85%29.aspx
    bi.biWidth = width;
    bi.biHeight = -height;  // this is the line that makes it draw upside down or not
    bi.biPlanes = 1;
    bi.biBitCount = 32;
    bi.biCompression = BI_RGB;
    bi.biSizeImage = 0;
    bi.biXPelsPerMeter = 1;
    bi.biYPelsPerMeter = 2;
    bi.biClrUsed = 3;
    bi.biClrImportant = 4;

    // use the previously created device context with the bitmap
    SelectObject(hwindowCompatibleDC, hbwindow);
    // copy from the window device context to the bitmap device context
    StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0, srcwidth, srcheight, SRCCOPY);  // change SRCCOPY to NOTSRCCOPY for wacky colors!
    src.create(height, width, CV_8UC4);
    GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS);      // copy from hwindowCompatibleDC to hbwindow

    // avoid memory leak
    DeleteObject(hbwindow);
    DeleteDC(hwindowCompatibleDC);
    ReleaseDC(hwnd, hwindowDC);
    return src;
}

int main(int argc, char** argv)
{
    std::vector<std::wstring> titles;  // we use std::wstring in place of std::string. This is necessary so that the entire character set can be represented.
    EnumWindows(windowNames, reinterpret_cast<LPARAM>(&titles));
    HWND hwndDesktop = GetDesktopWindow();
    size_t number = 0;
    int i = 0;
    for (const auto& title : titles)
        std::wcout << L"Title: " << i++ << title << std::endl;
    std::cin >> number;
    HWND hwndWindow = FindWindow(NULL, titles[number].c_str());

    cv::namedWindow("output", cv::WINDOW_NORMAL);
    cv::Mat src = hwnd2mat(/*hwndDesktop*/hwndWindow);
    cv::VideoWriter outputVideo("output.avi", cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), 1, cv::Size(src.cols, src.rows));
    outputVideo.write(src);

    int key = 0;
    while (key != 27)
    {
        src = hwnd2mat(hwndWindow);
        outputVideo.write(src);
        cv::imshow("output", src);
        key = cv::waitKey(60);  // press ESC to end
    }
    return 0;
}
The problem is that the Mat needs to be converted from BGRA to BGR before the frame is passed to the VideoWriter object.
For correct operation, write
Mat bgrImg; cvtColor(src, bgrImg, COLOR_BGRA2BGR);
inside the loop before sending a frame, and pass bgrImg as the frame.
VideoWriter instances also need to be closed using the release() method, which finalizes the video container file.
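Putting both fixes together, the recording loop from the question would look roughly like this (a sketch; hwnd2mat, hwndWindow and outputVideo are the ones defined above):
    cv::Mat bgrImg;
    int key = 0;
    while (key != 27)
    {
        src = hwnd2mat(hwndWindow);
        cv::cvtColor(src, bgrImg, cv::COLOR_BGRA2BGR);  // VideoWriter expects 3-channel BGR frames
        outputVideo.write(bgrImg);
        cv::imshow("output", src);
        key = cv::waitKey(60);                          // press ESC to end
    }
    outputVideo.release();                              // finalize the .avi container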
OpenCV has no support for modern screen capture AFAIK. You’ll need to use platform-specific means of doing this (maybe encapsulated in some library). The problem here is that the screen data is already in the GPU, and by using OpenCV you’re forcing it to be copied to the main memory and processed with a relatively slow CPU. It won’t perform well. Instead, a platform-specific approach will process the data on the GPU, using it to both extract the window’s frames and encode them. It’ll be very efficient both in terms of speed as well as energy consumption (you’ll vastly improve battery life while the capture is running, and will prevent the fans from being annoying on notebooks).

Anti-aliasing in MFC

I'm trying to implement anti-aliasing in my MFC app, using the technique described in this tutorial:
Create a bitmap (2x, 4x, 8x) the size of the original bitmap.
Draw on the resized bitmap (I'm only drawing simple figures: lines, circles, etc.).
Set the StretchBlt mode to HALFTONE.
Resize back to the original size with StretchBlt.
Drawing directly into the resized bitmap this way works, but I want to create a more generic function that receives a bitmap with the drawing already made and returns it anti-aliased. I tried this:
static HBITMAP AntiAliasing(HBITMAP hBitmap)
{
    int escala = 4;
    HBITMAP bmp = __copia(hBitmap);  // Copy the bitmap.
    HDC hMemDC = CreateCompatibleDC(NULL);
    HBITMAP bmpAntigo1 = (HBITMAP)::SelectObject(hMemDC, bmp);

    BITMAP bitmap;
    ::GetObject(hBitmap, sizeof(BITMAP), &bitmap);

    // Create a bitmap (2x, 4x, 8x) the size of the original bitmap.
    HDC hDCDimensionado = ::CreateCompatibleDC(hMemDC);
    HBITMAP bmpDimensionado = ::CreateCompatibleBitmap(hDCDimensionado,
                                                       bitmap.bmWidth * escala,
                                                       bitmap.bmHeight * escala);
    HBITMAP hBmpVelho = (HBITMAP)::SelectObject(hDCDimensionado, bmpDimensionado);

    // I also tried with {BLACKONWHITE, HALFTONE, WHITEONBLACK}
    int oldStretchBltMode2 = ::SetStretchBltMode(hDCDimensionado, COLORONCOLOR);

    // Resize the bitmap to the new size.
    ::StretchBlt(hDCDimensionado,
                 0, 0, bitmap.bmWidth * escala, bitmap.bmHeight * escala,
                 hMemDC,
                 0, 0, bitmap.bmWidth, bitmap.bmHeight,
                 SRCCOPY);
    /*
     * Here the bitmap has lost its colors and become black and white.
     */
    ::SetStretchBltMode(hDCDimensionado, oldStretchBltMode2);

    // Set StretchBltMode to HALFTONE so it can mimic the anti-aliasing effect.
    int oldStretchBltMode = ::SetStretchBltMode(hMemDC, HALFTONE);

    // Resize back to the original size.
    ::StretchBlt(hMemDC,
                 0, 0, bitmap.bmWidth, bitmap.bmHeight,
                 hDCDimensionado,
                 0, 0, escala * bitmap.bmWidth, escala * bitmap.bmHeight,
                 SRCCOPY);
    ::SetStretchBltMode(hMemDC, oldStretchBltMode);

    ::SelectObject(hMemDC, bmpAntigo1);
    ::DeleteDC(hMemDC);
    ::SelectObject(hDCDimensionado, hBmpVelho);
    ::DeleteDC(hDCDimensionado);
    return bmp;
}
But this function doesn't work: the result loses its colors (all drawings become black) and there is no anti-aliasing.
Any help will be appreciated!
From the documentation for CreateCompatibleBitmap:
Note: When a memory device context is created, it initially has a 1-by-1 monochrome bitmap selected into it. If this memory device context is used in CreateCompatibleBitmap, the bitmap that is created is a monochrome bitmap. To create a color bitmap, use the HDC that was used to create the memory device context, as shown in the following code:
Change the code and supply the HDC of the desktop, as shown below:
HDC hdc = ::GetDC(0);
HBITMAP bmpDimensionado = ::CreateCompatibleBitmap(hdc, ...)
::ReleaseDC(0, hdc);
This will show the image; however, this method will not produce the desired effect, because it simply magnifies each pixel to a larger size and then reduces it back to the original pixel. There is no blending with neighboring pixels.
Use other methods, such as Direct2D with a Gaussian blur effect, or use GDI+ instead with an interpolation mode:
Gdiplus::GdiplusStartup...
void foo(HDC hdc)
{
    Gdiplus::Bitmap bitmap(L"file.bmp");
    if (bitmap.GetLastStatus() != 0)
        return;
    auto w = bitmap.GetWidth();
    auto h = bitmap.GetHeight();
    auto maxw = w * 2;
    auto maxh = h * 2;

    Gdiplus::Bitmap membmp(maxw, maxh);
    Gdiplus::Graphics memgr(&membmp);
    memgr.SetInterpolationMode(Gdiplus::InterpolationModeHighQualityBilinear);
    memgr.DrawImage(&bitmap, 0, 0, maxw, maxh);

    Gdiplus::Graphics gr(hdc);
    gr.SetInterpolationMode(Gdiplus::InterpolationModeHighQualityBilinear);
    gr.DrawImage(&membmp, 0, 0, w, h);
}
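The Gdiplus::GdiplusStartup line above is only a hint; a minimal startup/shutdown sketch around such calls could look like this:
    #include <gdiplus.h>
    #pragma comment(lib, "gdiplus.lib")

    ULONG_PTR gdiplusToken;
    Gdiplus::GdiplusStartupInput gdiplusStartupInput;
    Gdiplus::GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);  // before any GDI+ call
    // ... call foo(hdc) here ...
    Gdiplus::GdiplusShutdown(gdiplusToken);                              // after the last GDI+ call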
If the target system is at least Vista, use GDI+ version 1.1 with the blur effect. See also How to turn on GDI+ 1.1 in MFC project.
#define GDIPVER 0x0110 // add this to precompiled header file

void blur(HDC hdc)
{
    Gdiplus::Graphics graphics(hdc);
    Gdiplus::Bitmap bitmap(L"file.bmp");
    if (bitmap.GetLastStatus() != 0)
        return;

    Gdiplus::Blur blur;
    Gdiplus::BlurParams blur_param;
    blur_param.radius = 3;        // change the radius for a different result
    blur_param.expandEdge = TRUE;
    blur.SetParameters(&blur_param);
    bitmap.ApplyEffect(&blur, NULL);

    graphics.DrawImage(&bitmap, 0, 0);
}

Create a DivX-encoded AVI from frames using OpenCV

This question is similar to this one and particularly this one, but my desired output is different. I'm trying to capture the desktop to video using OpenCV. The preferred output is an AVI file with DivX encoding. I'm new to OpenCV and bitmap programming in general.
As a first step, to make sure the DivX codec is present, I create a single frame (cv::Mat) of a solid color (yellow) and write it 100 times to the video file, as shown here:
int main(int argc, char* argv[])
{
    cv::Mat frame(1200, 1920, CV_8UC3, cv::Scalar(0, 50000, 50000));
    cv::VideoWriter* videoWriter = new cv::VideoWriter(
        "C:/videos/desktop.avi",
        CV_FOURCC('D','I','V','3'),
        5, cv::Size(1920, 1200), true);
    int frameCount = 0;
    while (frameCount < 100)
    {
        videoWriter->write(frame);
        ::Sleep(100);
        frameCount++;
    }
    delete videoWriter;
    return 0;
}
This works perfectly - the video file is created and can be played on my Win 10 machine with VLC, Windows Media Player or the Films&TV app. It's 100 frames of solid yellow, but it shows the video is being created properly.
Next step: replace the dummy cv::Mat frame in the code above with a series of screenshots of the desktop. I get a handle to the desktop window using GetDesktopWindow(), and then use the function hwnd2mat (taken from this SO question - thanks!) to convert the bitmap obtained from the desktop handle to a cv::Mat that I can write to my video.
I copied the hwnd2mat function verbatim except I don't scale the image - the desktop bitmap is already 1920x1200, and also the cv::Mat I create is CV_8UC3 instead of CV_8UC4 (CV_8UC4 causes my app to crash).
Here's the code, including a reprint of hwnd2mat:
int main(int argc, char* argv[])
{
    cv::VideoWriter* videoWriter = new cv::VideoWriter(
        "C:/videos/desktop.avi",
        CV_FOURCC('D','I','V','3'),
        5, Size(1920, 1200), true);
    int frameCount = 0;
    while (frameCount < 100)
    {
        HWND hDsktopWindow = ::GetDesktopWindow();
        cv::Mat frame = hwnd2mat(hDsktopWindow);
        videoWriter->write(frame);
        ::Sleep(100);
        frameCount++;
    }
    delete videoWriter;
    return 0;
}
cv::Mat hwnd2mat(HWND hwnd)
{
    HDC hwindowDC, hwindowCompatibleDC;
    int height, width, srcheight, srcwidth;
    HBITMAP hbwindow;
    cv::Mat src;
    BITMAPINFOHEADER bi;

    hwindowDC = GetDC(hwnd);
    hwindowCompatibleDC = CreateCompatibleDC(hwindowDC);
    SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);

    RECT windowsize;    // get the height and width of the screen
    GetClientRect(hwnd, &windowsize);

    srcheight = windowsize.bottom;
    srcwidth = windowsize.right;
    height = windowsize.bottom / 1;  // change this to whatever size you want to resize to
    width = windowsize.right / 1;

    src.create(height, width, CV_8UC3);

    // create a bitmap
    hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
    bi.biSize = sizeof(BITMAPINFOHEADER);
    bi.biWidth = width;
    bi.biHeight = -height;  // this is the line that makes it draw upside down or not
    bi.biPlanes = 1;
    bi.biBitCount = 32;
    bi.biCompression = BI_RGB;
    bi.biSizeImage = 0;
    bi.biXPelsPerMeter = 0;
    bi.biYPelsPerMeter = 0;
    bi.biClrUsed = 0;
    bi.biClrImportant = 0;

    // use the previously created device context with the bitmap
    SelectObject(hwindowCompatibleDC, hbwindow);
    // copy from the window device context to the bitmap device context
    StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0, srcwidth, srcheight, SRCCOPY);
    GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS);

    // avoid memory leak
    DeleteObject(hbwindow);
    DeleteDC(hwindowCompatibleDC);
    ReleaseDC(hwnd, hwindowDC);
    return src;
}
The result of this is that the video file is created and can be played without errors, but it's just solid grey. It seems like the bitmap of the desktop is not getting copied correctly into the cv::Mat frame. I've tried a zillion combinations of the values in the BITMAPINFOHEADER, but nothing works, and to be honest I don't know what I'm doing. I know OpenCV has conversion functions, but again, I don't even really know what I'm trying to convert to or from.
Any help appreciated!
Figured out a way to make it work - I have no idea if this is the best way, so comments or alternative solutions are still welcome.
It seems like for GetDIBits to work, the cv::Mat has to be 4-channel, i.e. CV_8UC4, like the original code of hwnd2mat before I changed it. If it is not CV_8UC4, no data is copied (GetDIBits returns 0 scan lines copied) and that's why my avi was just gray. So the first change was to create the src cv::Mat as 4-channel:
src.create(height, width, CV_8UC4);
But for the DivX-encoded AVI file that I'm trying to create, the frames should be 3-channel (don't ask me why). I added a call to an OpenCV conversion function after calling GetDIBits(), as follows:
cv::Mat dst;
dst.create(height, width, CV_8UC3);
cv::cvtColor(src, dst, CV_RGBA2RGB);
And then I return dst from hwnd2mat instead of src. The call to cvtColor removes the alpha channel (the A in RGBA) and dst ends up with just the R,G,B channels.
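For clarity, a sketch of how the end of hwnd2mat looks with those two changes applied (everything not shown stays as in the question):
    src.create(height, width, CV_8UC4);   // GetDIBits wants a 4-channel (32-bit) destination
    // ... StretchBlt / GetDIBits as before ...
    cv::Mat dst;
    dst.create(height, width, CV_8UC3);
    cv::cvtColor(src, dst, CV_RGBA2RGB);  // drop the alpha channel for the 3-channel VideoWriter
    // clean up the GDI objects as before, then:
    return dst;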
You can get a bitmap with no alpha channel from GetDIBits and write it straight to cv::VideoWriter. Just change biBitCount to 24 and leave the Mat format as CV_8UC3. This worked for me.
src.create(height, width, CV_8UC3);
bi.biBitCount = 24; // this is where to change

GDI screenshot, results vary on different computers

I am trying to take a screenshot with GDI and then use it in FFmpeg.
The screenshot works well and FFmpeg handles it without any problem.
But on some computers the image is not what I want, as you can see below.
Here is the code I use to initialize my bitmap:
//--
mImageBuffer = new unsigned char[mWxHxS];
memset(mImageBuffer, 0, mWxHxS);
//--
hScreenDC = GetDC(0);
hMemoryDC = CreateCompatibleDC(hScreenDC);
//--
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bi.bmiHeader.biBitCount = 24;
bi.bmiHeader.biWidth = mWidth;
bi.bmiHeader.biHeight = mHeight;
bi.bmiHeader.biCompression = BI_RGB;
bi.bmiHeader.biPlanes = 1;
bi.bmiHeader.biClrUsed = 24;
bi.bmiHeader.biClrImportant = 256;
hBitmap = CreateDIBSection(hMemoryDC, &bi, DIB_RGB_COLORS, &mImageBuffer, 0, 0);
SelectObject(hMemoryDC, hBitmap);
And here, for each screenshot:
if (BitBlt(
        hMemoryDC,
        0,
        0,
        mWidth,
        mHeight,
        hScreenDC,
        mPx,
        mPy,
        SRCCOPY | CAPTUREBLT
    ))
My application runs without any error, but it produces this ugly image, and only on some computers.
I don't know what difference on those computers is causing this (all are Win7 with Aero activated...).
I don't understand, because my code follows all the examples I found...
Please help me!
You are creating a device-independent bitmap (CreateDIBSection) and then using a device-dependent context (CreateCompatibleDC) to work with it. I believe you need to create a device-dependent bitmap to be compatible with BitBlt, or use StretchDIBits to support device-independent image data. The reason this works on some computers and not others is that the video driver determines the format of a device-dependent image, which may or may not be the same as the Windows definition of a device-independent image.
Here is an example of capturing an image (yes, it's unnecessarily long, but it still seems to contain good info): https://msdn.microsoft.com/en-us/library/windows/desktop/dd183402(v=vs.85).aspx
And here is documentation on StretchDIBits, in case you require a DIB: https://msdn.microsoft.com/en-us/library/windows/desktop/dd145121(v=vs.85).aspx
So I finally found the solution:
It seems that BitBlt and StretchBlt don't correctly handle the conversion from 32 to 24 bits on some computers...
Now I use only 32 bits with GDI and let FFmpeg, via libswscale, convert my RGBA image to a YUV format (a conversion sketch follows the code below).
My changes:
mWidth = GetDeviceCaps(hScreenDC, HORZRES);
mHeight = GetDeviceCaps(hScreenDC, VERTRES);
mWxHxS = mWidth * mHeight * 4;
bi.bmiHeader.biBitCount = 32;
hBitmap = CreateCompatibleBitmap(hScreenDC, mWidth, mHeight);

if (BitBlt(
        hMemoryDC,
        0,
        0,
        mWidth,
        mHeight,
        hScreenDC,
        mPx,
        mPy,
        SRCCOPY | CAPTUREBLT
    ) && GetDIBits(hScreenDC, hBitmap, 0, mHeight, mImageBuffer, &bi, DIB_RGB_COLORS))
{
    return true;
}
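For completeness, a rough sketch of the libswscale side (not my actual code): it assumes the captured buffer is 32-bit BGRA, that srcW/srcH hold the capture size, and that yuvFrame is a pre-allocated AVFrame. Note also that with a positive biHeight the GDI bits are stored bottom-up, so the rows may need flipping before or during the conversion.
extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>
}

// Convert one captured BGRA frame to YUV420P.
SwsContext* swsCtx = sws_getContext(srcW, srcH, AV_PIX_FMT_BGRA,
                                    srcW, srcH, AV_PIX_FMT_YUV420P,
                                    SWS_BILINEAR, NULL, NULL, NULL);
const uint8_t* srcSlice[1] = { mImageBuffer };  // the GDI capture buffer
int srcStride[1] = { srcW * 4 };                // 4 bytes per BGRA pixel
sws_scale(swsCtx, srcSlice, srcStride, 0, srcH,
          yuvFrame->data, yuvFrame->linesize);  // yuvFrame: a pre-allocated AVFrame
sws_freeContext(swsCtx);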
Thanks for trying to help me!

StretchBlt only works when nHeightDest is negative

I'm trying to use StretchBlt in order to copy pixels from a memory HDC to the window HDC.
The memory HDC gets the image from an invisible window which renders a stream using OpenGL.
Here's my code:
BITMAPINFOHEADER createBitmapHeader(int width, int height) {
    BITMAPINFOHEADER header;
    header.biSize = sizeof(BITMAPINFOHEADER);
    header.biWidth = width;
    header.biHeight = height;
    header.biPlanes = 1;
    header.biBitCount = 32;
    header.biCompression = BI_RGB;
    header.biSizeImage = 0;
    header.biXPelsPerMeter = 0;
    header.biYPelsPerMeter = 0;
    header.biClrUsed = 0;
    header.biClrImportant = 0;
    return header;
}

...

HDC memoryHdc = CreateCompatibleDC(windowHdc);

BITMAPINFO bitmapInfo;
bitmapInfo.bmiHeader = createBitmapHeader(targetDimensions.width, targetDimensions.height);

HBITMAP bitmap = CreateDIBitmap(windowHdc, &bitmapInfo.bmiHeader, CBM_INIT, offscreenBuffer, &bitmapInfo, DIB_RGB_COLORS);
SelectObject(memoryHdc, bitmap);
DeleteObject(bitmap);

SetStretchBltMode(windowHdc, COLORONCOLOR);
StretchBlt(windowHdc,
           targetDimensions.x, targetDimensions.y,
           targetDimensions.width, -targetDimensions.height,
           memoryHdc,
           sourceDimensions.x, sourceDimensions.y,
           sourceDimensions.width, sourceDimensions.height,
           SRCCOPY);

DeleteDC(memoryHdc);
Where windowHdc is the HDC of the window to which I want StretchBlt to copy the pixels, and offscreenBuffer is a void* to the pixels copied from the offscreen window in which OpenGL is rendering.
This code works great, except that the image is upside down and I want it vertically flipped.
I know that this happens because:
If nHeightSrc and nHeightDest have different signs, the function creates a mirror image of the bitmap along the y-axis.
But when I remove the minus sign, so that both the target and source heights have the same sign, I see no image in the window.
Just to check, I tried putting the minus on sourceDimensions.height instead, but that also results in no image, and the same happens if I negate the widths (both target and source).
Any idea why?
Thanks.