Programmatically grab screenshots in OS X - C++

I am going to port some screenshot-grabbing code (C++) from Linux to OS X. The current solution runs graphical applications in Xvfb and then uses Xlib to grab screenshots from the display. (It also works when running without Xvfb.)
As I understand it, OS X is moving away from X11, so my question is: what should I use besides Xlib to implement this now? I have found Quartz Display Services. Is that what makes sense to use now? Will that work with Xvfb?

Yes, you will be able to call functions like CGDisplayCreateImage (see Apple's documentation) by linking the ApplicationServices framework into your C++ tool.
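As a quick sanity check, something like the following should compile with clang++ main.cpp -framework ApplicationServices and confirm the API is reachable from plain C++ (a minimal sketch; error handling kept to a bare minimum):

#include <ApplicationServices/ApplicationServices.h>
#include <cstdio>

int main() {
    // grab one frame of the main display and report its size
    CGImageRef image = CGDisplayCreateImage(CGMainDisplayID());
    if (!image) {
        std::printf("capture failed\n");
        return 1;
    }
    std::printf("captured %zux%zu\n", CGImageGetWidth(image), CGImageGetHeight(image));
    CGImageRelease(image);
    return 0;
}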

I have written an example that captures the display and converts the capture to an OpenCV Mat.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <unistd.h>
#include <stdio.h>
#include <ApplicationServices/ApplicationServices.h>

using namespace std;
using namespace cv;

int main(int argc, char *const argv[])
{
    size_t width = CGDisplayPixelsWide(CGMainDisplayID());
    size_t height = CGDisplayPixelsHigh(CGMainDisplayID());

    Mat im(cv::Size(width, height), CV_8UC4);        // RGBA capture target
    Mat bgrim(cv::Size(width, height), CV_8UC3);
    Mat resizedim(cv::Size(width, height), CV_8UC3);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Wrap the Mat's pixel buffer in a CG bitmap context so Quartz draws straight into it
    CGContextRef contextRef = CGBitmapContextCreate(
        im.data, im.cols, im.rows,
        8, im.step[0],
        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);

    while (true)
    {
        CGImageRef imageRef = CGDisplayCreateImage(CGMainDisplayID());
        CGContextDrawImage(contextRef,
                           CGRectMake(0, 0, width, height),
                           imageRef);
        cvtColor(im, bgrim, CV_RGBA2BGR);
        resize(bgrim, resizedim, cv::Size(), 0.5, 0.5);
        imshow("test", resizedim);
        waitKey(10);
        CGImageRelease(imageRef);
    }
    // unreachable here, but these would balance the creates above:
    // CGContextRelease(contextRef);
    // CGColorSpaceRelease(colorSpace);
    return 0;
}
The result: I had expected my current display to be captured, but actually only the desktop wallpaper was captured. What CGMainDisplayID() refers to may be a hint to this problem. Anyway, I hope this brings you a bit closer to your goal.
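If the main display turns out to be the wrong target (as with the wallpaper-only capture above), it may help to enumerate the active displays instead of assuming CGMainDisplayID(); a small sketch, assuming at most 16 displays:

#include <ApplicationServices/ApplicationServices.h>
#include <cstdint>
#include <cstdio>

int main() {
    CGDirectDisplayID displays[16];
    uint32_t count = 0;
    // list every active display with its pixel dimensions
    if (CGGetActiveDisplayList(16, displays, &count) == kCGErrorSuccess) {
        for (uint32_t i = 0; i < count; ++i)
            std::printf("display %u: %zu x %zu\n", i,
                        CGDisplayPixelsWide(displays[i]),
                        CGDisplayPixelsHigh(displays[i]));
    }
    return 0;
}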

// needs <ApplicationServices/ApplicationServices.h>, <cstdio>, <cstdlib>, <cstring>, <cstdint>
void captureScreen() {
    CGImageRef image_ref = CGDisplayCreateImage(CGMainDisplayID());
    CGDataProviderRef provider = CGImageGetDataProvider(image_ref);
    CFDataRef dataref = CGDataProviderCopyData(provider);
    size_t width = CGImageGetWidth(image_ref);
    size_t height = CGImageGetHeight(image_ref);
    size_t bpp = CGImageGetBitsPerPixel(image_ref) / 8;
    size_t bytes_per_row = CGImageGetBytesPerRow(image_ref); // rows may be padded
    uint8_t *pixels = (uint8_t *)malloc(width * height * bpp);
    const uint8_t *src = CFDataGetBytePtr(dataref);
    for (size_t y = 0; y < height; y++) // copy row by row to drop any row padding
        memcpy(pixels + y * width * bpp, src + y * bytes_per_row, width * bpp);
    CFRelease(dataref);
    CGImageRelease(image_ref);
    FILE *stream = fopen("/Users/username/Desktop/screencap.raw", "wb");
    fwrite(pixels, bpp, width * height, stream);
    fclose(stream);
    free(pixels);
}
or in C#:
// https://stackoverflow.com/questions/1537587/capture-screen-image-in-c-on-osx
// https://github.com/Acollie/C-Screenshot-OSX/blob/master/C%2B%2B-screenshot/C%2B%2B-screenshot/main.cpp
// https://github.com/ScreenshotMonitor/ScreenshotCapture/blob/master/src/Pranas.ScreenshotCapture/ScreenshotCapture.cs
// https://screenshotmonitor.com/blog/capturing-screenshots-in-net-and-mono/
namespace rtaStreamingServer
{
    // https://github.com/xamarin/xamarin-macios
    // https://qiita.com/shimshimkaz/items/18bcf4767143ea5897c7
    public static class OSxScreenshot
    {
        private const string LIBCOREGRAPHICS = "/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics";

        [System.Runtime.InteropServices.DllImport(LIBCOREGRAPHICS)]
        private static extern System.IntPtr CGDisplayCreateImage(System.UInt32 displayId);

        [System.Runtime.InteropServices.DllImport(LIBCOREGRAPHICS)]
        private static extern void CFRelease(System.IntPtr handle);

        public static void TestCapture()
        {
            Foundation.NSNumber mainScreen = (Foundation.NSNumber)AppKit.NSScreen.MainScreen.DeviceDescription["NSScreenNumber"];

            using (CoreGraphics.CGImage cgImage = CreateImage(mainScreen.UInt32Value))
            {
                // https://stackoverflow.com/questions/17334786/get-pixel-from-the-screen-screenshot-in-max-osx/17343305#17343305
                // Get byte-array from CGImage
                // https://gist.github.com/zhangao0086/5fafb1e1c0b5d629eb76
                AppKit.NSBitmapImageRep bitmapRep = new AppKit.NSBitmapImageRep(cgImage);

                // var imageData = bitmapRep.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:])
                Foundation.NSData imageData = bitmapRep.RepresentationUsingTypeProperties(AppKit.NSBitmapImageFileType.Png);

                long len = imageData.Length;
                byte[] bytes = new byte[len];

                System.Runtime.InteropServices.GCHandle pinnedArray = System.Runtime.InteropServices.GCHandle.Alloc(bytes, System.Runtime.InteropServices.GCHandleType.Pinned);
                System.IntPtr pointer = pinnedArray.AddrOfPinnedObject();
                // Do your stuff...
                imageData.GetBytes(pointer, new System.IntPtr(len));
                pinnedArray.Free();

                using (AppKit.NSImage nsImage = new AppKit.NSImage(cgImage, new System.Drawing.SizeF(cgImage.Width, cgImage.Height)))
                {
                    // ImageView.Image = nsImage;
                    // And now? How to get the image bytes?
                    // https://theconfuzedsourcecode.wordpress.com/2016/02/24/convert-android-bitmap-image-and-ios-uiimage-to-byte-array-in-xamarin/
                    // https://stackoverflow.com/questions/5645157/nsimage-from-byte-array
                    // https://stackoverflow.com/questions/53060723/nsimage-source-from-byte-array-cocoa-app-xamarin-c-sharp
                    // https://gist.github.com/zhangao0086/5fafb1e1c0b5d629eb76
                    // https://www.quora.com/What-is-a-way-to-convert-UIImage-to-a-byte-array-in-Swift?share=1
                    // https://stackoverflow.com/questions/17112314/converting-uiimage-to-byte-array
                } // End Using nsImage
            } // End Using cgImage
        } // End Sub TestCapture

        public static CoreGraphics.CGImage CreateImage(System.UInt32 displayId)
        {
            System.IntPtr handle = System.IntPtr.Zero;
            try
            {
                handle = CGDisplayCreateImage(displayId);
                return new CoreGraphics.CGImage(handle);
            }
            finally
            {
                if (handle != System.IntPtr.Zero)
                {
                    CFRelease(handle);
                }
            }
        } // End Sub CreateImage
    } // End Class OSxScreenshot
} // End Namespace rtaStreamingServer

Related

How to get an image with a transparent background from an FLTK Fl_Image_Surface?

I want to draw a string or char offscreen and use it as an Fl_Image or Fl_RGB_Image.
Based on this link I can do that easily with Fl_Image_Surface. The problem is that Fl_Image_Surface does not support transparency when I convert its output to an image (Fl_RGB_Image) using the image() method. So is there any way I can achieve this? I can do it in Java Swing with BufferedImage, and in Android with Canvas by creating a Bitmap with Bitmap.Config.ARGB_8888.
If you prefer to do it manually, you can try the following:
#include <FL/Enumerations.H>
#include <FL/Fl.H>
#include <FL/Fl_Box.H>
#include <FL/Fl_Device.H>
#include <FL/Fl_Double_Window.H>
#include <FL/Fl_Image.H>
#include <FL/Fl_Image_Surface.H>
#include <FL/fl_draw.H>
#include <cassert>
#include <cstring>
#include <vector>

Fl_RGB_Image *get_image(int w, int h) {
    // draw image on surface
    auto img_surf = new Fl_Image_Surface(w, h);
    Fl_Surface_Device::push_current(img_surf);
    // We'll use white to mask 255, 255, 255, see the loop
    fl_color(FL_WHITE);
    fl_rectf(0, 0, w, h);
    fl_color(FL_BLACK);
    fl_font(FL_HELVETICA_BOLD, 20);
    fl_draw("Hello", 100, 100);
    auto image = img_surf->image();
    Fl_Surface_Device::pop_current(); // pop the surface before deleting it
    delete img_surf;
    return image;
}

Fl_RGB_Image *get_transparent_image(const Fl_RGB_Image *image) {
    assert(image);
    // make image transparent
    auto data = (const unsigned char *)(*image->data());
    auto len = size_t(image->w()) * image->h() * image->d(); // the depth is 3 by default
    std::vector<unsigned char> temp;
    for (size_t i = 0; i < len; i++) {
        if (i > 0 && i % 3 == 0) {
            // check if the previous 3 vals are the rgb values of white, add a 0 alpha
            if (data[i - 1] == 255 && data[i - 2] == 255 && data[i - 3] == 255)
                temp.push_back(0);
            else
                // add a 255 alpha, making the black opaque
                temp.push_back(255);
            temp.push_back(data[i]);
        } else {
            temp.push_back(data[i]);
        }
    }
    // alpha for the final pixel, same white check as in the loop
    if (data[len - 1] == 255 && data[len - 2] == 255 && data[len - 3] == 255)
        temp.push_back(0);
    else
        temp.push_back(255);
    assert(temp.size() == size_t(image->w()) * image->h() * 4);
    auto new_image_data = new unsigned char[size_t(image->w()) * image->h() * 4];
    memcpy(new_image_data, temp.data(), size_t(image->w()) * image->h() * 4);
    auto new_image = new Fl_RGB_Image(new_image_data, image->w(), image->h(), 4); // account for alpha
    new_image->alloc_array = 1; // let the image own (and later free) the buffer
    return new_image;
}

int main() {
    auto win = new Fl_Double_Window(400, 300);
    auto box = new Fl_Box(0, 0, 400, 300);
    win->end();
    win->show();
    auto image = get_image(box->w(), box->h());
    auto transparent_image = get_transparent_image(image);
    delete image;
    box->image(transparent_image);
    box->redraw();
    return Fl::run();
}
The idea is that an Fl_Image_Surface gives an Fl_RGB_Image with 3 channels (r, g, b) and no alpha. We add the alpha manually by walking the data into a temporary vector (this can be optimized, if you know the colors you are using, by checking only data[i] == 255). The vector is an RAII type whose lifetime ends at the end of the scope, so we memcpy its contents into a long-lived unsigned char array, which we pass to an Fl_RGB_Image with a depth of 4 to account for the alpha channel.
The other option is to use an external library like CImg (single header lib) to draw text into an image buffer and then pass that buffer to Fl_RGB_Image.
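For reference, a rough sketch of that second option, assuming the single-header CImg.h is on the include path (the font size and colors here are arbitrary). CImg stores channels as separate planes, so the buffer has to be interleaved before handing it to Fl_RGB_Image:

#include "CImg.h"
#include <FL/Fl_Image.H>
#include <cstring>

Fl_RGB_Image *text_image_cimg(int w, int h) {
    using namespace cimg_library;
    const unsigned char fg[] = {0, 0, 0, 255}; // opaque black text
    const unsigned char bg[] = {0, 0, 0, 0};   // fully transparent background
    CImg<unsigned char> img(w, h, 1, 4, 0);    // RGBA, zero-initialized (transparent)
    img.draw_text(100, 100, "Hello", fg, bg, 1.0f, 20);
    img.permute_axes("cxyz");                  // planar -> interleaved RGBA
    auto buf = new unsigned char[size_t(w) * h * 4];
    std::memcpy(buf, img.data(), size_t(w) * h * 4);
    auto rgb = new Fl_RGB_Image(buf, w, h, 4);
    rgb->alloc_array = 1;                      // image owns the buffer
    return rgb;
}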

How to resize an image from an rgb buffer using c++

I have a (char*) RGB buffer that holds the data of an actual image. Let's say the actual image resolution is 720x576. Now I want to resize it to a resolution of, say, 120x90.
How can I do this using https://code.google.com/p/jpeg-compressor/ or libjpeg?
Note: I can use any other library, but it should work on Linux.
Edit: A video decoder decodes a frame in YUV, which I convert to RGB. All of this happens in a buffer. I need to resize the RGB buffer to make a thumbnail out of it with a variable size.
Thanks for the help in advance.
I did the following to achieve my goal:
#define TN_WIDTH 240
#define TN_HEIGHT 180

#include <cstdio>
#include <iostream>
#include "jpegcompressor/jpge.h"
#include "jpegcompressor/jpgd.h"
#include <ippi.h>

using jpge::uint8;
using std::cout;

bool createThumnailJpeg(const uint8 *pSrc, int srcwidth, int srcheight)
{
    int req_comps = 3;
    jpge::params params;
    params.m_quality = 50;
    params.m_subsampling = jpge::H2V2;
    params.m_two_pass_flag = false;

    FILE *fpJPEGTN = fopen("Resource\\jpegcompressor.jpeg", "wb");
    int dstWidth = TN_WIDTH;
    int dstHeight = TN_HEIGHT;
    int uiDstBufferSize = dstWidth * dstHeight * 3;
    uint8 *pDstRGBBuffer = new uint8[uiDstBufferSize];
    uint8 *pJPEGTNBuffer = new uint8[uiDstBufferSize];

    IppiSize srcSize = {srcwidth, srcheight};
    IppiRect srcROI = {0, 0, srcwidth, srcheight};
    IppiSize dstROISize = {dstWidth, dstHeight};
    double xfactor = (double)dstWidth / srcwidth;
    double yfactor = (double)dstHeight / srcheight;

    // downscale the RGB buffer with IPP, then JPEG-encode the result
    IppStatus status = ippiResize_8u_C3R(pSrc, srcSize, srcwidth * 3, srcROI,
                                         pDstRGBBuffer, dstWidth * 3, dstROISize,
                                         xfactor, yfactor, 1);

    if (!jpge::compress_image_to_jpeg_file_in_memory(pJPEGTNBuffer, uiDstBufferSize,
                                                     dstWidth, dstHeight, req_comps,
                                                     pDstRGBBuffer, params))
    {
        cout << "failed!";
        delete[] pDstRGBBuffer;
        delete[] pJPEGTNBuffer;
        return false;
    }

    if (fpJPEGTN)
    {
        // uiDstBufferSize now holds the actual compressed size
        fwrite(pJPEGTNBuffer, uiDstBufferSize, 1, fpJPEGTN);
        fclose(fpJPEGTN);
    }

    delete[] pDstRGBBuffer;
    delete[] pJPEGTNBuffer;
    return true;
}
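Since the question allows any library as long as it works on Linux, here is a dependency-free alternative for the scaling step (a sketch of plain nearest-neighbor sampling, which is usually acceptable for small thumbnails):

#include <cstdint>
#include <vector>

// scale a packed 24-bit RGB buffer to dstW x dstH by nearest-neighbor sampling
std::vector<uint8_t> resizeRGB(const uint8_t *src, int srcW, int srcH,
                               int dstW, int dstH)
{
    std::vector<uint8_t> dst(size_t(dstW) * dstH * 3);
    for (int y = 0; y < dstH; ++y) {
        int sy = y * srcH / dstH;                 // nearest source row
        for (int x = 0; x < dstW; ++x) {
            int sx = x * srcW / dstW;             // nearest source column
            const uint8_t *s = src + (size_t(sy) * srcW + sx) * 3;
            uint8_t *d = &dst[(size_t(y) * dstW + x) * 3];
            d[0] = s[0]; d[1] = s[1]; d[2] = s[2];
        }
    }
    return dst;
}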

BitmapSource copyPixels correct arguments

The stride I used generates an exception, and I don't know what the correct stride is. The input image is a 32-bit JPG. Please tell me what values I should put in below (I tried many things, but they either generated exceptions or produced a corrupted JPG):
array<System::Byte>^ pixels = gcnew array<System::Byte>(WHAT VALUE);
bitmapSource->CopyPixels(pixels, WHAT VALUE, 0);
// Jpg.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#using <mscorlib.dll> // requires C++/CLI

using namespace System;
using namespace System::IO;
using namespace System::Windows::Media::Imaging;
using namespace System::Windows::Media;
using namespace System::Windows::Controls;
using namespace std;

[System::STAThread]
int _tmain(int argc, _TCHAR* argv[])
{
    // Open a Stream and decode a JPEG image
    Stream^ imageStreamSource = gcnew FileStream("C:/heart2.jpg",
        FileMode::Open, FileAccess::Read, FileShare::Read);
    JpegBitmapDecoder^ decoder = gcnew JpegBitmapDecoder(
        imageStreamSource, BitmapCreateOptions::PreservePixelFormat,
        BitmapCacheOption::Default);
    BitmapSource^ bitmapSource = decoder->Frames[0]; // <-- we have the bitmap
    // Draw the Image
    Image^ myImage = gcnew Image();
    myImage->Source = bitmapSource;
    myImage->Stretch = Stretch::None;
    myImage->Margin = System::Windows::Thickness(20);

    int width = bitmapSource->PixelWidth;
    int height = bitmapSource->PixelHeight;
    int stride = (width * bitmapSource->Format.BitsPerPixel + 31) / 32;
    array<System::Byte>^ pixels
        = gcnew array<System::Byte>(height * width * stride);
    bitmapSource->CopyPixels(pixels, stride, 0);

    int x;
    cin >> x;
    return 0;
}
From the MSDN documentation on BitmapData.Stride:
http://msdn.microsoft.com/en-us/library/system.drawing.imaging.bitmapdata.stride.aspx
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary.
So the correct value depends on how many bits per pixel you have in your image.
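Concretely, using the variables from the question's code (a sketch; note the buffer size is height * stride, not height * width * stride):

int stride = ((width * bitmapSource->Format.BitsPerPixel + 31) / 32) * 4; // bytes per row, rounded up to a 4-byte boundary
array<System::Byte>^ pixels = gcnew array<System::Byte>(height * stride);
bitmapSource->CopyPixels(pixels, stride, 0);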

Invalid operation while gcnew Image CLI/C++

I get a weird InvalidOperationException from PresentationCore.dll while constructing an image with gcnew Image(). I have attached the project and the JPG file (which can be put in C:\). It cannot really be checked any other way, because the project configuration (references) took a long time, and the copied code alone will not work.
http://www.speedyshare.com/Vrr84/Jpg.zip
How can I solve that problem?
// Jpg.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#using <mscorlib.dll> // requires C++/CLI

using namespace System;
using namespace System::IO;
using namespace System::Windows::Media::Imaging;
using namespace System::Windows::Media;
using namespace System::Windows::Controls;

int _tmain(int argc, _TCHAR* argv[])
{
    // Open a Stream and decode a JPEG image
    Stream^ imageStreamSource = gcnew FileStream("C:/heart.jpg", FileMode::Open, FileAccess::Read, FileShare::Read);
    JpegBitmapDecoder^ decoder = gcnew JpegBitmapDecoder(imageStreamSource, BitmapCreateOptions::PreservePixelFormat, BitmapCacheOption::Default);
    BitmapSource^ bitmapSource = decoder->Frames[0]; // <-- we have the bitmap
    // Draw the Image
    Image^ myImage = gcnew Image(); // <----------- ERROR
    myImage->Source = bitmapSource;
    myImage->Stretch = Stretch::None;
    myImage->Margin = System::Windows::Thickness(20);

    int width = 128;
    int height = width;
    int stride = width / 8;
    array<System::Byte>^ pixels = gcnew array<System::Byte>(height * stride);

    // Define the image palette
    BitmapPalette^ myPalette = BitmapPalettes::Halftone256;

    // Create a new empty image with the pre-defined palette.
    BitmapSource^ image = BitmapSource::Create(
        width, height,
        96, 96,
        PixelFormats::Indexed1,
        myPalette,
        pixels,
        stride);

    System::IO::FileStream^ stream = gcnew System::IO::FileStream("new.jpg", FileMode::Create);
    JpegBitmapEncoder^ encoder = gcnew JpegBitmapEncoder();
    TextBlock^ myTextBlock = gcnew System::Windows::Controls::TextBlock();
    myTextBlock->Text = "Codec Author is: " + encoder->CodecInfo->Author->ToString();
    encoder->FlipHorizontal = true;
    encoder->FlipVertical = false;
    encoder->QualityLevel = 30;
    encoder->Rotation = Rotation::Rotate90;
    encoder->Frames->Add(BitmapFrame::Create(image));
    encoder->Save(stream);
    return 0;
}
The core issue is here:
The calling thread must be STA [...]
Your main thread must be marked as a single-threaded apartment (STA for short) for WPF to function correctly. The fix? Add [System::STAThread] to your _tmain, thus informing the runtime that the main thread has to be STA.
[System::STAThread]
int _tmain(int argc, _TCHAR* argv[])
{
    // the rest of your code doesn't change
}

Convert Leptonica Pix Object to QPixmap (or other image object)

I'm using the Leptonica library to process some pictures, and afterwards I want to show them in my Qt GUI. Leptonica uses its own format, Pix, for images, while Qt uses its own format, QPixmap. At the moment the only way for me is to save the pictures as a file (e.g. BMP) after processing and then load them again with a Qt function call. Now I want to convert them in code instead, so I don't need the detour of saving them to the filesystem. Any ideas how to do this?
Best Regards
// edit:
Okay, as already suggested, I tried to convert the PIX* to a QImage.
The PIX* is defined like this:
http://tpgit.github.com/Leptonica/pix_8h_source.html
struct Pix
{
    l_uint32            w;          /* width in pixels                  */
    l_uint32            h;          /* height in pixels                 */
    l_uint32            d;          /* depth in bits                    */
    l_uint32            wpl;        /* 32-bit words/line                */
    l_uint32            refcount;   /* reference count (1 if no clones) */
    l_int32             xres;       /* image res (ppi) in x direction   */
                                    /* (use 0 if unknown)               */
    l_int32             yres;       /* image res (ppi) in y direction   */
                                    /* (use 0 if unknown)               */
    l_int32             informat;   /* input file format, IFF_*         */
    char               *text;       /* text string associated with pix  */
    struct PixColormap *colormap;   /* colormap (may be null)           */
    l_uint32           *data;       /* the image data                   */
};
while QImage offers me a method like this:
http://developer.qt.nokia.com/doc/qt-4.8/qimage.html#QImage-7
QImage(const uchar *data,
       int width,
       int height,
       int bytesPerLine,
       Format format)
I assume I can't just copy the data from the PIX to the QImage when calling the constructor. I guess I need to fill the QImage pixel by pixel, but I don't know how. Do I need to loop through all the coordinates? How do I take the bit depth into account? Any ideas?
I use this for converting a QImage to a PIX:
PIX* TessTools::qImage2PIX(QImage& qImage) {
    PIX *pixs;
    l_uint32 *lines;

    qImage = qImage.rgbSwapped();
    int width = qImage.width();
    int height = qImage.height();
    int depth = qImage.depth();
    int wpl = qImage.bytesPerLine() / 4;

    pixs = pixCreate(width, height, depth);
    pixSetWpl(pixs, wpl);
    pixSetColormap(pixs, NULL);
    l_uint32 *datas = pixs->data;

    for (int y = 0; y < height; y++) {
        lines = datas + y * wpl;
        QByteArray a((const char*)qImage.scanLine(y), qImage.bytesPerLine());
        for (int j = 0; j < a.size(); j++) {
            *((l_uint8 *)lines + j) = a[j];
        }
    }
    return pixEndianByteSwapNew(pixs);
}
And this for converting a PIX to a QImage:
QImage TessTools::PIX2QImage(PIX *pixImage) {
    int width = pixGetWidth(pixImage);
    int height = pixGetHeight(pixImage);
    int depth = pixGetDepth(pixImage);
    int bytesPerLine = pixGetWpl(pixImage) * 4;
    l_uint32 *s_data = pixGetData(pixEndianByteSwapNew(pixImage));

    QImage::Format format;
    if (depth == 1)
        format = QImage::Format_Mono;
    else if (depth == 8)
        format = QImage::Format_Indexed8;
    else
        format = QImage::Format_RGB32;

    QImage result((uchar*)s_data, width, height, bytesPerLine, format);

    // Handle palette
    QVector<QRgb> _bwCT;
    _bwCT.append(qRgb(255, 255, 255));
    _bwCT.append(qRgb(0, 0, 0));

    QVector<QRgb> _grayscaleCT(256);
    for (int i = 0; i < 256; i++) {
        _grayscaleCT[i] = qRgb(i, i, i); // fill in place; appending would double the table
    }
    if (depth == 1) {
        result.setColorTable(_bwCT);
    } else if (depth == 8) {
        result.setColorTable(_grayscaleCT);
    } else {
        result.setColorTable(_grayscaleCT);
    }

    if (result.isNull()) {
        static QImage none(0, 0, QImage::Format_Invalid);
        qDebug() << "***Invalid format!!!";
        return none;
    }
    return result.rgbSwapped();
}
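One caveat worth noting: the QImage constructor used above wraps an existing buffer without copying it, so that buffer must stay alive for as long as the QImage (or a deep copy of it) is in use. A hypothetical usage sketch:

PIX *pix = pixRead("input.png");          // any format Leptonica can read
QImage img = TessTools::PIX2QImage(pix);  // img still points into PIX-owned data
QPixmap pm = QPixmap::fromImage(img);     // fromImage makes a deep copy
pixDestroy(&pix);                         // safe now that the data is copied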
This code accepts a const QImage& parameter.
static PIX* makePIXFromQImage(const QImage &image)
{
    QByteArray ba;
    QBuffer buf(&ba);
    buf.open(QIODevice::WriteOnly);
    image.save(&buf, "BMP");
    return pixReadMemBmp((const l_uint8 *)ba.constData(), ba.size());
}
I do not know the Leptonica library, but I had a short look at the documentation and found the description of the PIX structure. You can create a QImage from the raw data and convert that to a QPixmap with convertFromImage.
Well, I could solve the problem this way:
Leptonica offers the function
l_int32 pixWriteMemBmp(l_uint8 **pdata, size_t *psize, PIX *pix)
With this function you can write into memory instead of a file stream. The BMP header and format persist (there are equivalent functions for other image formats too).
The corresponding function from Qt is this one:
bool QImage::loadFromData(const uchar *data, int len, const char *format = 0)
Since the header persists, I just need to pass the data pointer and the size to loadFromData, and Qt does the rest.
So all together it would be like this:
PIX *m_pix;
FILE *pFile;
pFile = fopen("PathToFile", "rb");
m_pix = pixReadStreamBmp(pFile); // if other file format, use the according function
fclose(pFile);
// Now we have a Pix object from Leptonica

l_uint8 *ptr_memory;
size_t len;
pixWriteMemBmp(&ptr_memory, &len, m_pix);
// Now we have the picture somewhere in memory

QImage testimage;
QPixmap pixmap;
testimage.loadFromData((uchar *)ptr_memory, len);
pixmap.convertFromImage(testimage);
// Now we have the image as a pixmap in Qt
This actually works for me, though I don't know if there is a way to do it the other way around as easily. (If there is, please let me know.)
Best Regards
You can save your pixmap to RAM instead of a file (use a QByteArray to store the data and a QBuffer as your I/O device).
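A sketch of that in-memory round trip (the function name is hypothetical; pixReadMemBmp and QPixmap::save are the real APIs):

#include <QBuffer>
#include <QByteArray>
#include <QPixmap>
#include <leptonica/allheaders.h>

static PIX *pixFromQPixmap(const QPixmap &pixmap)
{
    QByteArray ba;
    QBuffer buf(&ba);
    buf.open(QIODevice::WriteOnly);
    pixmap.save(&buf, "BMP"); // the BMP header stays intact, as with pixWriteMemBmp
    return pixReadMemBmp((const l_uint8 *)ba.constData(), ba.size());
}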