I want to draw a string or char offscreen and use it as an Fl_Image or Fl_RGB_Image.
Based on this link I can do that easily with Fl_Image_Surface. The problem with Fl_Image_Surface is that there is no transparency when I convert its output to an image (Fl_RGB_Image) with the image() method. So is there any way I can achieve this? I can do it in Java Swing with BufferedImage, and in Android with Canvas by creating a Bitmap with Bitmap.Config.ARGB_8888.
If you prefer to do it manually, you can try the following:
#include <FL/Enumerations.H>
#include <FL/Fl.H>
#include <FL/Fl_Box.H>
#include <FL/Fl_Device.H>
#include <FL/Fl_Double_Window.H>
#include <FL/Fl_Image.H>
#include <FL/Fl_Image_Surface.H>
#include <FL/fl_draw.H>
#include <cassert>
#include <cstring>
#include <vector>
Fl_RGB_Image *get_image(int w, int h) {
    // draw the text on an offscreen surface
    auto img_surf = new Fl_Image_Surface(w, h);
    Fl_Surface_Device::push_current(img_surf);
    // We'll use white as the mask color (255, 255, 255), see the loop below
    fl_color(FL_WHITE);
    fl_rectf(0, 0, w, h);
    fl_color(FL_BLACK);
    fl_font(FL_HELVETICA_BOLD, 20);
    fl_draw("Hello", 100, 100);
    auto image = img_surf->image();
    // restore the previous drawing surface before deleting the image surface
    Fl_Surface_Device::pop_current();
    delete img_surf;
    return image;
}
Fl_RGB_Image *get_transparent_image(const Fl_RGB_Image *image) {
    assert(image);
    // add an alpha channel, making white pixels fully transparent
    auto data = (const unsigned char *)(*image->data());
    auto len = image->w() * image->h() * image->d(); // the depth is 3 by default
    std::vector<unsigned char> temp;
    for (int i = 0; i < len; i++) {
        if (i > 0 && i % 3 == 0) {
            // a full rgb triplet has been copied; if it was white, append a 0 alpha,
            // otherwise a 255 alpha (making the black text opaque)
            if (data[i - 3] == 255 && data[i - 2] == 255 && data[i - 1] == 255)
                temp.push_back(0);
            else
                temp.push_back(255);
        }
        temp.push_back(data[i]);
    }
    // alpha for the last pixel
    if (data[len - 3] == 255 && data[len - 2] == 255 && data[len - 1] == 255)
        temp.push_back(0);
    else
        temp.push_back(255);
    assert((int)temp.size() == image->w() * image->h() * 4);
    auto new_image_data = new unsigned char[image->w() * image->h() * 4];
    memcpy(new_image_data, temp.data(), image->w() * image->h() * 4);
    auto new_image = new Fl_RGB_Image(new_image_data, image->w(), image->h(), 4); // depth 4 accounts for alpha
    new_image->alloc_array = 1; // let FLTK free new_image_data with the image
    return new_image;
}
int main() {
    auto win = new Fl_Double_Window(400, 300);
    auto box = new Fl_Box(0, 0, 400, 300);
    win->end();
    win->show();
    auto image = get_image(box->w(), box->h());
    auto transparent_image = get_transparent_image(image);
    delete image;
    box->image(transparent_image);
    box->redraw();
    return Fl::run();
}
The idea is that an Fl_Image_Surface gives an Fl_RGB_Image with 3 channels (r, g, b) and no alpha. We add the alpha manually by walking the pixel data into a temporary vector (this can be optimized if you know the colors you're using, e.g. by only checking data[i] == 255). The vector is an RAII type whose lifetime ends at the end of the scope, so we memcpy its contents into a long-lived unsigned char array and pass that to an Fl_RGB_Image with the depth set to 4, accounting for alpha.
The other option is to use an external library like CImg (a single-header library) to draw the text into an RGBA buffer and then pass that buffer to Fl_RGB_Image, roughly as sketched below.
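A rough sketch of that route, assuming CImg's built-in bitmap font is good enough for the text (the helper name, coordinates, and font size are just placeholders):

#include <FL/Fl_Image.H>
#include "CImg.h" // single-header library

Fl_RGB_Image *text_image_cimg(const char *text, int w, int h) {
    using namespace cimg_library;
    CImg<unsigned char> img(w, h, 1, 4, 0); // RGBA, initialized fully transparent
    const unsigned char black[] = {0, 0, 0, 255};
    img.draw_text(10, 10, text, black, 0, 1.0f, 32); // opacity 1, font height 32
    // CImg stores channels as separate planes; Fl_RGB_Image expects interleaved RGBA
    auto buf = new unsigned char[w * h * 4];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            for (int c = 0; c < 4; c++)
                buf[(y * w + x) * 4 + c] = img(x, y, 0, c);
    auto out = new Fl_RGB_Image(buf, w, h, 4);
    out->alloc_array = 1; // FLTK frees buf together with the image
    return out;
}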
So I found this link regarding my question, but it is for C#:
Create a PNG from an array of bytes
I have an int array of variable length; I will call it "pix[]".
For now it can be any size from 3 to 256, later possibly bigger.
What I want to do now is convert it into a pixel image.
I am still a noob in C++, so please excuse me.
I tried to download some libraries that make working with libpng easier, but they do not seem to be working (Ubuntu, Code::Blocks).
So my questions are:
1) How do you create a new bitmap (which libraries, which command)?
2) How do I fill it with the information from "pix[]"?
3) How do I save it?
If this is a repost of a question, I am happy about a link as well ;)
Here is what I have worked out so far, thanks for your help.
int main(){
    FILE *imageFile;
    int x,y,pixel,height=2,width=3;

    imageFile=fopen("image.pgm","wb");
    if(imageFile==NULL){
        perror("ERROR: Cannot open output file");
        exit(EXIT_FAILURE);
    }

    fprintf(imageFile,"P3\n");                 // P3 filetype
    fprintf(imageFile,"%d %d\n",width,height); // dimensions
    fprintf(imageFile,"255\n");                // Max pixel

    int pix[100] {200,200,200, 100,100,100, 0,0,0, 255,0,0, 0,255,0, 0,0,255};
    fwrite(pix,1,18,imageFile);
    fclose(imageFile);
}
I have not fully understood what it does. I can open the output image, but it is not a correct representation of the array.
If I change things around, for example by making a two-dimensional array, then the image viewer tells me "expected an integer" and doesn't show me an image.
So far so good.
Since I have the array before the image, I created a function aufrunden to round up to the next integer, because I want to create a square image.
int aufrunden (double h)
{
    int i = h;
    if (h-i == 0)
    {
        return i;
    }
    else
    {
        i = h+1;
        return i;
    }
}
This function is used in the creation of the image. The image may end up bigger than the information the array provides, like this (a is the length of the array):
double h;
h= sqrt(a/3.0);
int i = aufrunden(h);
FILE *imageFile;
int height=i,width=i;
It might now happen that the array is a=24 long. aufrunden makes the image 3x3, so it has 27 values... meaning the values for 1 pixel are missing.
Or worse, it is only a=23 long, also creating a 3x3 image.
What will fwrite(pix,1,18,imageFile); write into those pixels? It would be best if the remaining values were just 0.
*edit: never mind, I will just append 0 to the end of the array until it fills up the whole square... sorry
Consider using a Netpbm format (pbm, pgm, or ppm).
These images are extremely simple text files that you can write without any special libraries. Then use some third-party software such as ImageMagick, GraphicsMagick, or pnmtopng to convert your image to PNG format. Here is a wiki article describing the Netpbm format.
Here's a simple PPM image:
P3 2 3 255
0 0 0 255 255 255
255 0 0 0 255 255
100 100 100 200 200 200
The first line contains "P3" (the "magic number" identifying it as a text PPM), 2 (width), 3 (height), 255 (maximum intensity).
The second line contains the two RGB pixels for the top row.
The third and fourth lines each contain the two RGB pixels for rows 2 and 3.
Use a larger number for maximum intensity (e.g. 1024) if you need a larger range of intensities, up to 65535.
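As a minimal sketch (the file name and pixel values are just placeholders), writing such a text PPM from an int array needs nothing beyond stdio:
#include <stdio.h>

int main(void){
    int width=2, height=3;
    int pix[] = {0,0,0, 255,255,255, 255,0,0, 0,255,255, 100,100,100, 200,200,200};

    FILE *f = fopen("image.ppm","w");           // text mode is fine for P3
    fprintf(f,"P3\n%d %d\n255\n",width,height); // magic number, dimensions, max intensity
    for(int i=0; i<width*height*3; i+=3)
        fprintf(f,"%d %d %d\n",pix[i],pix[i+1],pix[i+2]);
    fclose(f);
    return 0;
}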
Edited by Mark Setchell beyond this point - so I am the guilty party!
The image looks like this (when the six pixels are enlarged):
The ImageMagick command to convert, and enlarge, is like this:
convert image.ppm -scale 400x result.png
If ImageMagick is a bit heavyweight, or difficult to install, you can more simply use the NetPBM tools (from here) like this (it's a single precompiled binary):
pnmtopng image.ppm > result.png
If, as it seems, you have got Magick++ and are happy to use that, you can write your code in C/C++ like this:
////////////////////////////////////////////////////////////////////////////////
// sample.cpp
// Mark Setchell
//
// ImageMagick Magick++ sample code
//
// Compile with:
// g++ sample.cpp -o sample $(Magick++-config --cppflags --cxxflags --ldflags --libs)
////////////////////////////////////////////////////////////////////////////////
#include <Magick++.h>
#include <iostream>
using namespace std;
using namespace Magick;
int main(int argc,char **argv)
{
    unsigned char pix[]={200,200,200, 100,100,100, 0,0,0, 255,0,0, 0,255,0, 0,0,255};

    // Initialise ImageMagick library
    InitializeMagick(*argv);

    // Create Image object and read in from pixel data above
    Image image;
    image.read(2,3,"RGB",CharPixel,pix);

    // Write the image to a file - change extension if you want a GIF or JPEG
    image.write("result.png");
}
You are not far off - well done for trying! As far as I can see, you only had a couple of mistakes:
You had P3 where you would actually need P6 if writing in binary.
You were using int type for your data, whereas you need to be using unsigned char for 8-bit data.
You had the width and height interchanged.
You were using the PGM extension which is for Portable Grey Maps, whereas your data is colour, so you need to use the PPM extension which is for Portable Pix Map.
So, the working code looks like this:
#include <stdio.h>
#include <stdlib.h>
int main(){
    FILE *imageFile;
    int x,y,pixel,height=3,width=2;

    imageFile=fopen("image.ppm","wb");
    if(imageFile==NULL){
        perror("ERROR: Cannot open output file");
        exit(EXIT_FAILURE);
    }

    fprintf(imageFile,"P6\n");                 // P6 filetype
    fprintf(imageFile,"%d %d\n",width,height); // dimensions
    fprintf(imageFile,"255\n");                // Max pixel

    unsigned char pix[]={200,200,200, 100,100,100, 0,0,0, 255,0,0, 0,255,0, 0,0,255};
    fwrite(pix,1,18,imageFile);
    fclose(imageFile);
}
If you then run that, you can convert the resulting image to a nice big PNG with
convert image.ppm -scale 400x result.png
If you subsequently need 16-bit data, you would change the 255 to 65535, and store in an unsigned short array rather than unsigned char and when you come to the fwrite(), you would need to write double the number of bytes.
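A hedged sketch of that 16-bit variant (note that PPM stores the most significant byte of each sample first, so it is safest to write the two bytes explicitly rather than fwrite the unsigned short array on a little-endian machine):
#include <stdio.h>

int main(void){
    int width=2, height=3;
    unsigned short pix[]={65535,0,0, 0,65535,0, 0,0,65535, 32768,32768,32768, 65535,65535,65535, 0,0,0};

    FILE *f=fopen("image16.ppm","wb");
    fprintf(f,"P6\n%d %d\n65535\n",width,height); // maxval > 255 means 2 bytes per sample
    for(int i=0; i<width*height*3; i++){
        fputc(pix[i]>>8, f);   // most significant byte first
        fputc(pix[i]&255, f);
    }
    fclose(f);
    return 0;
}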
The code below will take an integer array of pixel colors as input and write it to a .bmp bitmap file or, in reverse, read a .bmp bitmap file and store its image contents as an int array. It only requires the standard <string> and <fstream> headers. The input parameter path can be, for example, C:/path/to/your/image.bmp, and data is formatted as data[x+y*width]=(red<<16)|(green<<8)|blue;, where red, green and blue are integers in the range 0-255 and the pixel position is (x,y).
#include <string>
#include <iostream>
#include <fstream>
using namespace std;
typedef unsigned int uint;

int* read_bmp(const string path, uint& width, uint& height) {
    ifstream file(path, ios::in|ios::binary);
    if(file.fail()) {
        cerr << "\rError: File \"" << path << "\" does not exist!" << endl;
        return nullptr;
    }
    uint w=0, h=0;
    char header[54];
    file.read(header, 54);
    for(uint i=0; i<4; i++) { // width and height are little-endian 4-byte fields at offsets 18 and 22
        w |= (header[18+i]&255)<<(8*i);
        h |= (header[22+i]&255)<<(8*i);
    }
    const int pad=(4-(3*w)%4)%4, imgsize=(3*w+pad)*h; // each row is padded to a multiple of 4 bytes
    char* img = new char[imgsize];
    file.read(img, imgsize);
    file.close();
    int* data = new int[w*h];
    for(uint y=0; y<h; y++) {
        for(uint x=0; x<w; x++) {
            const int i = 3*x+y*(3*w+pad);
            data[x+(h-1-y)*w] = (img[i]&255)|(img[i+1]&255)<<8|(img[i+2]&255)<<16; // BGR, stored bottom-up
        }
    }
    delete[] img;
    width = w;
    height = h;
    return data;
}
void write_bmp(const string path, const uint width, const uint height, const int* const data) {
    const int pad=(4-(3*width)%4)%4, filesize=54+(3*width+pad)*height; // horizontal line must be a multiple of 4 bytes long, header is 54 bytes
    char header[54] = { 'B','M', 0,0,0,0, 0,0,0,0, 54,0,0,0, 40,0,0,0, 0,0,0,0, 0,0,0,0, 1,0,24,0 };
    for(uint i=0; i<4; i++) {
        header[ 2+i] = (char)((filesize>>(8*i))&255);
        header[18+i] = (char)((width   >>(8*i))&255);
        header[22+i] = (char)((height  >>(8*i))&255);
    }
    char* img = new char[filesize];
    for(uint i=0; i<54; i++) img[i] = header[i];
    for(uint y=0; y<height; y++) {
        for(uint x=0; x<width; x++) {
            const int color = data[x+(height-1-y)*width];
            const int i = 54+3*x+y*(3*width+pad);
            img[i  ] = (char)( color     &255);
            img[i+1] = (char)((color>> 8)&255);
            img[i+2] = (char)((color>>16)&255);
        }
        for(uint p=0; p<pad; p++) img[54+(3*width+p)+y*(3*width+pad)] = 0;
    }
    ofstream file(path, ios::out|ios::binary);
    file.write(img, filesize);
    file.close();
    delete[] img;
}
The code snippet was inspired by https://stackoverflow.com/a/47785639/9178992
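For illustration, a small (hypothetical) usage of the two helpers above, round-tripping a gradient through a .bmp file:
int main() {
    const uint width = 256, height = 128;
    int* data = new int[width*height];
    for(uint y=0; y<height; y++) {
        for(uint x=0; x<width; x++) {
            data[x+y*width] = ((x&255)<<16) | ((y&255)<<8) | 128; // red = x, green = y, blue = 128
        }
    }
    write_bmp("test.bmp", width, height, data); // write the gradient
    uint w=0, h=0;
    int* copy = read_bmp("test.bmp", w, h);     // read it back
    delete[] data;
    delete[] copy;
    return 0;
}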
For .png images, use lodepng.cpp and lodepng.h:
#include <string>
#include <vector>
#include <fstream>
#include "lodepng.h"
using namespace std;
typedef unsigned int uint;
typedef unsigned char uchar; // lodepng works with unsigned char buffers
int* read_png(const string path, uint& width, uint& height) {
    vector<uchar> img;
    lodepng::decode(img, width, height, path, LCT_RGB); // decode straight to 8-bit RGB
    int* data = new int[width*height];
    for(uint i=0; i<width*height; i++) {
        data[i] = img[3*i]<<16|img[3*i+1]<<8|img[3*i+2];
    }
    return data;
}
void write_png(const string path, const uint width, const uint height, const int* const data) {
    uchar* img = new uchar[3*width*height];
    for(uint i=0; i<width*height; i++) {
        const int color = data[i];
        img[3*i  ] = (color>>16)&255;
        img[3*i+1] = (color>> 8)&255;
        img[3*i+2] = color      &255;
    }
    lodepng::encode(path, img, width, height, LCT_RGB); // encode as 8-bit RGB
    delete[] img;
}
As in the subject: how do I include JpegBitmapDecoder^ in a C++/CLI project? I have tried including the namespace, but I get:
Error 1 error C3083: 'Windows': the symbol to the left of a '::' must be a type C:\Users\Duke\Documents\Visual Studio 2010\Projects\Jpg\Jpg\Jpg.cpp 8 1 Jpg
Error 2 error C3083: 'Media': the symbol to the left of a '::' must be a type C:\Users\Duke\Documents\Visual Studio 2010\Projects\Jpg\Jpg\Jpg.cpp 8 1 Jpg
Error 3 error C2039: 'Imaging' : is not a member of 'System' C:\Users\Duke\Documents\Visual Studio 2010\Projects\Jpg\Jpg\Jpg.cpp 8 1 Jpg
Error 4 error C2871: 'Imaging' : a namespace with this name does not exist C:\Users\Duke\Documents\Visual Studio 2010\Projects\Jpg\Jpg\Jpg.cpp 8 1 Jpg
// Jpg.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#using <mscorlib.dll> //requires CLI

using namespace System;
using namespace System::IO;
using namespace System::Windows::Media::Imaging;

int _tmain(int argc, _TCHAR* argv[])
{
    // Open a Stream and decode a JPEG image
    Stream^ imageStreamSource = gcnew FileStream("C:\Users\Duke\Desktop\heart.jpg", FileMode::Open, FileAccess::Read, FileShare::Read);
    JpegBitmapDecoder^ decoder = gcnew JpegBitmapDecoder(imageStreamSource, BitmapCreateOptions::PreservePixelFormat, BitmapCacheOption::Default); // I want that decoder
    BitmapSource^ bitmapSource = decoder->Frames[0]; // <-- we have the bitmap

    // Draw the Image
    Image^ myImage = gcnew Image();
    myImage->Source = bitmapSource;
    myImage->Stretch = Stretch::None;
    myImage->Margin = System::Windows::Thickness(20);

    //
    int width = 128;
    int height = width;
    int stride = width / 8;
    array<System::Byte>^ pixels = gcnew array<System::Byte>(height * stride);

    // Define the image palette
    BitmapPalette^ myPalette = BitmapPalettes::Halftone256;

    // Creates a new empty image with the pre-defined palette.
    BitmapSource^ image = BitmapSource::Create(
        width, height,
        96, 96,
        PixelFormats::Indexed1,
        myPalette,
        pixels,
        stride);

    System::IO::FileStream^ stream = gcnew System::IO::FileStream("new.jpg", FileMode::Create);
    JpegBitmapEncoder^ encoder = gcnew JpegBitmapEncoder();
    TextBlock^ myTextBlock = gcnew System::Windows::Controls::TextBlock();
    myTextBlock->Text = "Codec Author is: " + encoder->CodecInfo->Author->ToString();
    encoder->FlipHorizontal = true;
    encoder->FlipVertical = false;
    encoder->QualityLevel = 30;
    encoder->Rotation = Rotation::Rotate90;
    encoder->Frames->Add(BitmapFrame::Create(image));
    encoder->Save(stream);
    return 0;
}
Use this instead:
using namespace System::Windows::Media::Imaging;
Clearly, the using lines above it that did compile should have hinted that in C++/CLI you use :: rather than . as the namespace separator.
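Note also that System::Windows::Media::Imaging lives in the WPF assemblies, so the project must reference them as well (the follow-up below mentions that configuring the references took a while). Assuming the assemblies can be found on the compiler's #using search path, the references can be expressed directly in code:
#using <WindowsBase.dll>
#using <PresentationCore.dll>      // JpegBitmapDecoder, BitmapSource, ...
#using <PresentationFramework.dll> // Image, TextBlock, ...

using namespace System::Windows::Media::Imaging;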
I get a weird InvalidOperationException in PresentationCore.dll while constructing an image with gcnew Image().
I am attaching the project and the JPG file (which can be put in C:\). It cannot really be checked any other way, because configuring the project (references) took a long time, and the copied code alone will not work.
http://www.speedyshare.com/Vrr84/Jpg.zip
How can I solve that problem?
// Jpg.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#using <mscorlib.dll> //requires CLI

using namespace System;
using namespace System::IO;
using namespace System::Windows::Media::Imaging;
using namespace System::Windows::Media;
using namespace System::Windows::Controls;

int _tmain(int argc, _TCHAR* argv[])
{
    // Open a Stream and decode a JPEG image
    Stream^ imageStreamSource = gcnew FileStream("C:/heart.jpg", FileMode::Open, FileAccess::Read, FileShare::Read);
    JpegBitmapDecoder^ decoder = gcnew JpegBitmapDecoder(imageStreamSource, BitmapCreateOptions::PreservePixelFormat, BitmapCacheOption::Default);
    BitmapSource^ bitmapSource = decoder->Frames[0]; // <-- we have the bitmap

    // Draw the Image
    Image^ myImage = gcnew Image(); //<----------- ERROR
    myImage->Source = bitmapSource;
    myImage->Stretch = Stretch::None;
    myImage->Margin = System::Windows::Thickness(20);

    //
    int width = 128;
    int height = width;
    int stride = width / 8;
    array<System::Byte>^ pixels = gcnew array<System::Byte>(height * stride);

    // Define the image palette
    BitmapPalette^ myPalette = BitmapPalettes::Halftone256;

    // Creates a new empty image with the pre-defined palette.
    BitmapSource^ image = BitmapSource::Create(
        width, height,
        96, 96,
        PixelFormats::Indexed1,
        myPalette,
        pixels,
        stride);

    System::IO::FileStream^ stream = gcnew System::IO::FileStream("new.jpg", FileMode::Create);
    JpegBitmapEncoder^ encoder = gcnew JpegBitmapEncoder();
    TextBlock^ myTextBlock = gcnew System::Windows::Controls::TextBlock();
    myTextBlock->Text = "Codec Author is: " + encoder->CodecInfo->Author->ToString();
    encoder->FlipHorizontal = true;
    encoder->FlipVertical = false;
    encoder->QualityLevel = 30;
    encoder->Rotation = Rotation::Rotate90;
    encoder->Frames->Add(BitmapFrame::Create(image));
    encoder->Save(stream);
    return 0;
}
The core issue is right there in the exception message:
The calling thread must be STA [...]
Your main thread must be marked as a single-threaded apartment (STA for short) for WPF to function correctly. The fix? Add [System::STAThread] to your _tmain, informing the runtime that the main thread has to be STA.
[System::STAThread]
int _tmain(int argc, _TCHAR* argv[])
{
// the rest of your code doesn't change
}
I am going to port some screenshot-grabbing code (C++) for Linux to OS X. The current solution runs graphical applications in Xvfb and then uses Xlib to grab screenshots from the display. (It also supports running without Xvfb.)
As I understand it, OS X is moving away from X11, so my question is: what should I use besides Xlib to implement this now? I have found Quartz Display Services. Is that what makes sense to use now? Will that work with Xvfb?
Yes, you will be able to call functions like CGDisplayCreateImage (documentation linked for you) by linking the Application Services framework to your C++ tool.
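A minimal, untested sketch of that approach (the output path is a placeholder): grab the main display with CGDisplayCreateImage and hand the resulting CGImage to ImageIO to write a PNG.
#include <ApplicationServices/ApplicationServices.h>
#include <ImageIO/ImageIO.h>
// Build with something like:
//   clang++ grab.cpp -framework ApplicationServices -framework ImageIO -framework CoreFoundation

int main() {
    CGImageRef image = CGDisplayCreateImage(CGMainDisplayID());
    if (!image) return 1; // e.g. no screen-capture permission
    CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
        CFSTR("/tmp/screenshot.png"), kCFURLPOSIXPathStyle, false);
    // "public.png" is the UTI for PNG output
    CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url, CFSTR("public.png"), 1, NULL);
    CGImageDestinationAddImage(dest, image, NULL);
    bool ok = CGImageDestinationFinalize(dest);
    CFRelease(dest);
    CFRelease(url);
    CGImageRelease(image);
    return ok ? 0 : 1;
}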
I have written an example that captures the PC display screen and converts it to an OpenCV Mat.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <unistd.h>
#include <stdio.h>
#include <ApplicationServices/ApplicationServices.h>
using namespace std;
using namespace cv;
int main (int argc, char * const argv[])
{
    size_t width = CGDisplayPixelsWide(CGMainDisplayID());
    size_t height = CGDisplayPixelsHigh(CGMainDisplayID());

    Mat im(cv::Size(width,height), CV_8UC4);
    Mat bgrim(cv::Size(width,height), CV_8UC3);
    Mat resizedim(cv::Size(width,height), CV_8UC3);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef contextRef = CGBitmapContextCreate(
        im.data, im.cols, im.rows,
        8, im.step[0],
        colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);

    while (true)
    {
        CGImageRef imageRef = CGDisplayCreateImage(CGMainDisplayID());
        CGContextDrawImage(contextRef,
                           CGRectMake(0, 0, width, height),
                           imageRef);
        cvtColor(im, bgrim, COLOR_RGBA2BGR); // was CV_RGBA2BGR in the old C API
        resize(bgrim, resizedim, cv::Size(), 0.5, 0.5);
        imshow("test", resizedim);
        waitKey(10);
        CGImageRelease(imageRef);
    }
    // CGContextRelease(contextRef);
    // CGColorSpaceRelease(colorSpace);
    return 0;
}
When I ran it, I had expected my current display to be captured, but actually only the desktop wallpaper was captured. What CGMainDisplayID() refers to may be a hint to this problem.
Anyway, I hope this brings you a bit closer to your goal.
// Needs <ApplicationServices/ApplicationServices.h>, <cstdint>, <cstdio>, <cstdlib>, <cstring>
void captureScreen(){
    CGImageRef image_ref = CGDisplayCreateImage(CGMainDisplayID());
    CGDataProviderRef provider = CGImageGetDataProvider(image_ref);
    CFDataRef dataref = CGDataProviderCopyData(provider);

    size_t width = CGImageGetWidth(image_ref);
    size_t height = CGImageGetHeight(image_ref);
    size_t bpp = CGImageGetBitsPerPixel(image_ref) / 8;

    uint8_t *pixels = (uint8_t *)malloc(width * height * bpp);
    memcpy(pixels, CFDataGetBytePtr(dataref), width * height * bpp);
    CFRelease(dataref);
    CGImageRelease(image_ref);

    FILE *stream = fopen("/Users/username/Desktop/screencap.raw", "w+");
    fwrite(pixels, bpp, width * height, stream);
    fclose(stream);
    free(pixels);
}
or in C#:
// https://stackoverflow.com/questions/1537587/capture-screen-image-in-c-on-osx
// https://github.com/Acollie/C-Screenshot-OSX/blob/master/C%2B%2B-screenshot/C%2B%2B-screenshot/main.cpp
// https://github.com/ScreenshotMonitor/ScreenshotCapture/blob/master/src/Pranas.ScreenshotCapture/ScreenshotCapture.cs
// https://screenshotmonitor.com/blog/capturing-screenshots-in-net-and-mono/
namespace rtaStreamingServer
{
    // https://github.com/xamarin/xamarin-macios
    // https://qiita.com/shimshimkaz/items/18bcf4767143ea5897c7
    public static class OSxScreenshot
    {
        private const string LIBCOREGRAPHICS = "/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics";

        [System.Runtime.InteropServices.DllImport(LIBCOREGRAPHICS)]
        private static extern System.IntPtr CGDisplayCreateImage(System.UInt32 displayId);

        [System.Runtime.InteropServices.DllImport(LIBCOREGRAPHICS)]
        private static extern void CFRelease(System.IntPtr handle);

        public static void TestCapture()
        {
            Foundation.NSNumber mainScreen = (Foundation.NSNumber)AppKit.NSScreen.MainScreen.DeviceDescription["NSScreenNumber"];

            using (CoreGraphics.CGImage cgImage = CreateImage(mainScreen.UInt32Value))
            {
                // https://stackoverflow.com/questions/17334786/get-pixel-from-the-screen-screenshot-in-max-osx/17343305#17343305
                // Get byte-array from CGImage
                // https://gist.github.com/zhangao0086/5fafb1e1c0b5d629eb76
                AppKit.NSBitmapImageRep bitmapRep = new AppKit.NSBitmapImageRep(cgImage);

                // var imageData = bitmapRep.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:])
                Foundation.NSData imageData = bitmapRep.RepresentationUsingTypeProperties(AppKit.NSBitmapImageFileType.Png);

                long len = imageData.Length;
                byte[] bytes = new byte[len];

                System.Runtime.InteropServices.GCHandle pinnedArray = System.Runtime.InteropServices.GCHandle.Alloc(bytes, System.Runtime.InteropServices.GCHandleType.Pinned);
                System.IntPtr pointer = pinnedArray.AddrOfPinnedObject();
                // Do your stuff...
                imageData.GetBytes(pointer, new System.IntPtr(len));
                pinnedArray.Free();

                using (AppKit.NSImage nsImage = new AppKit.NSImage(cgImage, new System.Drawing.SizeF(cgImage.Width, cgImage.Height)))
                {
                    // ImageView.Image = nsImage;
                    // And now? How to get the image bytes?
                    // https://theconfuzedsourcecode.wordpress.com/2016/02/24/convert-android-bitmap-image-and-ios-uiimage-to-byte-array-in-xamarin/
                    // https://stackoverflow.com/questions/5645157/nsimage-from-byte-array
                    // https://stackoverflow.com/questions/53060723/nsimage-source-from-byte-array-cocoa-app-xamarin-c-sharp
                    // https://gist.github.com/zhangao0086/5fafb1e1c0b5d629eb76
                    // https://www.quora.com/What-is-a-way-to-convert-UIImage-to-a-byte-array-in-Swift?share=1
                    // https://stackoverflow.com/questions/17112314/converting-uiimage-to-byte-array
                } // End Using nsImage
            } // End Using cgImage
        } // End Sub TestCapture

        public static CoreGraphics.CGImage CreateImage(System.UInt32 displayId)
        {
            System.IntPtr handle = System.IntPtr.Zero;
            try
            {
                handle = CGDisplayCreateImage(displayId);
                return new CoreGraphics.CGImage(handle);
            }
            finally
            {
                if (handle != System.IntPtr.Zero)
                {
                    CFRelease(handle);
                }
            }
        } // End Sub CreateImage
    } // End Class OSxScreenshot
} // End Namespace rtaStreamingServer