I'm reading the mouse cursor pixmap data from the StdFBShmem_t structure, as defined in the IOFramebufferShared API.
Everything works fine 90% of the time. However, I have noticed that some applications on the Mac set a cursor in a different format. According to the documentation for the data structures, the cursor pixmap format should always be in the same format as the frame buffer. My frame buffer is 32 bpp. I expect the pixmap data to be in the format 0xAARRGGBB, which it is (most of the time). However, in some cases, I'm reading data that looks like a mask. Specifically, the pixels in this data will either be 0x00FFFFFF or 0x00000000. This looks to me to be a mask for separate pixel data stored somewhere else.
As far as I can tell, the only application that uses this cursor pixel format is Qt Creator, but I need to work with all applications, so I'd like to sort this out.
The code I'm using to read the cursor pixmap data is:
NSAutoreleasePool *autoReleasePool = [[NSAutoreleasePool alloc] init];
NSPoint mouseLocation = [NSEvent mouseLocation];
NSArray *allScreens = [NSScreen screens];
NSEnumerator *screensEnum = [allScreens objectEnumerator];
NSScreen *screen;
NSDictionary *screenDesc = nil;
while ((screen = [screensEnum nextObject]))
{
    NSRect screenFrame = [screen frame];
    screenDesc = [screen deviceDescription];
    if (NSMouseInRect(mouseLocation, screenFrame, NO))
        break;
}
if (screen)
{
    kern_return_t err;
    CGDirectDisplayID displayID =
        (CGDirectDisplayID) [[screenDesc objectForKey:@"NSScreenNumber"] pointerValue];
    task_port_t taskPort = mach_task_self();
    io_service_t displayServicePort = CGDisplayIOServicePort(displayID);
    io_connect_t displayConnection = 0;
    err = IOFramebufferOpen(displayServicePort,
                            taskPort,
                            kIOFBSharedConnectType,
                            &displayConnection);
    if (KERN_SUCCESS == err)
    {
        union
        {
            vm_address_t vm_ptr;
            StdFBShmem_t *fbshmem;
        } cursorInfo;
        vm_size_t size;
        err = IOConnectMapMemory(displayConnection,
                                 kIOFBCursorMemory,
                                 taskPort,
                                 &cursorInfo.vm_ptr,
                                 &size,
                                 kIOMapAnywhere | kIOMapDefaultCache | kIOMapReadOnly);
        if (KERN_SUCCESS == err)
        {
            // For some reason, cursor data is not always in the same format as
            // the frame buffer. For this reason, we need some way to detect
            // which structure we should be reading.
            QByteArray pixData(
                (const char*)cursorInfo.fbshmem->cursor.rgb24.image[currentFrame],
                m_mouseInfo.currentSize.width() * m_mouseInfo.currentSize.height() * 4);
            IOConnectUnmapMemory(displayConnection,
                                 kIOFBCursorMemory,
                                 taskPort,
                                 cursorInfo.vm_ptr);
        } // IOConnectMapMemory
        else
            qDebug() << "IOConnectMapMemory Failed:" << err;
        IOServiceClose(displayConnection);
    } // IOFramebufferOpen
    else
        qDebug() << "IOFramebufferOpen Failed:" << err;
} // if screen
[autoReleasePool release];
My questions are:
1. How can I detect if the cursor is a different format from the framebuffer?
2. Where can I read the actual pixel data? The bm18Cursor structure contains a mask section, but it's not in the right place for me to be reading it using the code above.
How can I detect if the cursor is a different format from the framebuffer?
The cursor is in the framebuffer. It can't be in a different format than itself.
There is no way to tell what format it's in (x-radar://problem/7751503). There would be a way to divine at least the number of bytes per pixel if you could tell how many frames the cursor has, but since you can't (that information isn't set as of 10.6.1 — x-radar://problem/7751530), you are left trying to figure out two factors of a four-factor product (bytes per pixel × width × height × number of frames, where you only have the width, the height, and the product). And even if you can figure out those missing two factors, you still don't know what order the bytes are in or whether the color components are premultiplied by the alpha component.
Where can I read the actual pixel data?
In the cursor member of the shared-cursor-memory structure.
You should define IOFB_ARBITRARY_SIZE_CURSOR before including the I/O Kit headers. Cursors can be any size now, not just 16×16, which is the size you expect when you don't define that constant. As an example, the usual Mac arrow cursor is 24×24, the “Windows” arrow cursor in CrossOver is 32×32, and the arrow cursor in X11 is 10×16.
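For example, a minimal sketch of the define (assuming the usual I/O Kit header location):

// Must appear before the header so StdFBShmem_t is declared with the
// arbitrary-size cursor layout instead of the legacy 16x16 one.
#define IOFB_ARBITRARY_SIZE_CURSOR
#include <IOKit/graphics/IOFramebufferShared.h>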
However, in some cases, I'm reading data that looks like a mask. Specifically, the pixels in this data will either be 0x00FFFFFF or 0x00000000. This looks to me to be a mask for separate pixel data stored somewhere else.
That sounds to me more like 16-bit pixels with an 8-bit alpha channel. At least it's more probably 5-6-5 than 5-5-5.
As far as I can tell, the only application that uses this cursor pixel format is Qt Creator, but I need to work with all applications, so I'd like to sort this out.
I'm able to capture the current cursor in that app just fine with my new cursor-capturing app. Is there a specific part of the app I should hit to make it show me a specific cursor?
You might try the CGSCreateRegisteredCursorImage function, as demonstrated by Karsten in a comment on my weblog.
It is a private function, so it may change or go away at any time, so you should check whether it exists and hold IOFramebuffer in reserve, but as long as it does exist, you may find it more reliable than the complex and thinly-documented IOFramebuffer.
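As a minimal sketch of such an existence check (the signature below is an assumption based on that comment; CGS types and private functions are undocumented and may change at any time):

#include <dlfcn.h>
#include <ApplicationServices/ApplicationServices.h>

// Assumed signature of the private function; not guaranteed by Apple.
typedef CGImageRef (*CGSCreateRegisteredCursorImagePtr)(int connection,
                                                        char *cursorName,
                                                        CGPoint *hotSpot);

CGSCreateRegisteredCursorImagePtr createCursorImage =
    (CGSCreateRegisteredCursorImagePtr)dlsym(RTLD_DEFAULT,
                                             "CGSCreateRegisteredCursorImage");
if (createCursorImage == NULL)
{
    // symbol is missing on this OS release: fall back to the IOFramebuffer path
}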
Related
In a camera application bitmap pixel arrays are retrieved from a streaming camera.
The pixel arrays are captured by writing them to a named pipe, where on the other end of the pipe, ffmpeg retrieves them and creates an AVI file.
I will need to create one custom frame (with custom text on), and pipe its pixels as the first frame in the resulting movie.
The question is how can I use a TBitmap (for convenience) to:
1. Create an X by Y monochrome (8-bit) bitmap from scratch, with custom text on it. I want the background to be white and the text to be black. (Mostly figured this step out; see below.)
2. Retrieve the pixel array that I can send/write to the pipe.
Step 1: The following code creates a TBitmap and writes text on it:
int w = 658;
int h = 492;
TBitmap* bm = new TBitmap();
bm->Width = w;
bm->Height = h;
bm->HandleType = bmDIB;
bm->PixelFormat = pf8bit;
bm->Canvas->Font->Name = "Tahoma";
bm->Canvas->Font->Size = 8;
int textY = 10;
std::string info("some Text");
bm->Canvas->TextOut(10, textY, info.c_str());
The above basically concludes step 1.
The writing/piping code expects a byte array with the bitmap's pixels; e.g.
unsigned long numWritten;
WriteFile(mPipeHandle, pImage, size, &numWritten, NULL);
where pImage is a pointer to an unsigned char buffer (the bitmap's pixels), and size is the length of this buffer.
Update:
Using the generated TBitmap and a TMemoryStream for transferring data to the ffmpeg pipeline does not generate the proper result. I get a distorted image with 3 diagonal lines on it.
The buffer size for the camera frames I receive is exactly 323736, which is equal to the number of pixels in the image, i.e. 658x492.
NOTE: I have concluded that this 'bitmap' is not padded; 658 is not divisible by four.
The buffer I get after dumping my generated bitmap to a memory stream, however, has size 325798, which is 2062 bytes larger than it is supposed to be. As @Spektre pointed out below, this discrepancy may be padding?
Using the following code for getting the pixel array:
ByteBuffer CustomBitmap::getPixArray()
{
    // --- Local variables --- //
    unsigned int iInfoHeaderSize = 0;
    unsigned int iImageSize = 0;
    BITMAPINFO *pBitmapInfoHeader;
    unsigned char *pBitmapImageBits;

    // First we call GetDIBSizes() to determine the amount of
    // memory that must be allocated before calling GetDIB().
    // NB: GetDIBSizes() is a part of the VCL.
    GetDIBSizes(mTheBitmap->Handle,
                iInfoHeaderSize,
                iImageSize);

    // Next we allocate memory according to the information
    // returned by GetDIBSizes().
    // (Note: iInfoHeaderSize is a size in bytes, so this over-allocates;
    // it works, but new unsigned char[iInfoHeaderSize] would be tighter.)
    pBitmapInfoHeader = new BITMAPINFO[iInfoHeaderSize];
    pBitmapImageBits = new unsigned char[iImageSize];

    // Call GetDIB() to convert a device dependent bitmap into a
    // Device Independent Bitmap (a DIB).
    // NB: GetDIB() is a part of the VCL.
    GetDIB(mTheBitmap->Handle,
           mTheBitmap->Palette,
           pBitmapInfoHeader,
           pBitmapImageBits);

    delete [] pBitmapInfoHeader;

    ByteBuffer buf;
    buf.buffer = pBitmapImageBits;
    buf.size = iImageSize;
    return buf;
}
So the final challenge seems to be getting a byte array that has the same size as the ones coming from the camera. How can I find and remove the padding bytes from the TBitmap data?
TBitmap has a PixelFormat property to set the bit depth.
TBitmap has a HandleType property to control whether a DDB or a DIB is created. DIB is the default.
Since you are passing BMPs around between different systems, you really should be using DIBs instead of DDBs, to avoid any corruption/misinterpretation of the pixel data.
Also, this line of code:
Image1->Picture->Bitmap->Handle = bm->Handle;
Should be changed to this instead:
Image1->Picture->Bitmap->Assign(bm);
// or:
// Image1->Picture->Bitmap = bm;
Or this:
Image1->Picture->Assign(bm);
Either way, don't forget to delete bm; afterwards, since the TPicture makes a copy of the input TBitmap, it does not take ownership.
To get the BMP data as a buffer of bytes, you can use the TBitmap::SaveToStream() method, saving to a TMemoryStream. Or, if you just want the pixel data, not the complete BMP data (i.e., without the BMP headers - see Bitmap Storage), you can use the Win32 GetDIBits() function, which outputs the pixels in DIB format. You can't obtain a byte buffer of the pixels for a DDB, since they depend on the device they are rendered to. DDBs are only usable in-memory in conjunction with HDCs; you can't pass them around. But you can convert a DIB to a DDB once you have a final device to render it to.
In other words, get the pixels from the camera, save them to a DIB, pass that around as needed (ie, over the pipe), and then do whatever you need with it - save to a file, convert to DDB to render onscreen, etc.
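As a minimal sketch of the SaveToStream() route (using the mPipeHandle from the question; note the stream contains the complete BMP, headers, palette, and padded rows included):

TMemoryStream *ms = new TMemoryStream;
bm->SaveToStream(ms);
// ms->Memory points at the full BMP data, ms->Size is its length in bytes
unsigned long numWritten;
WriteFile(mPipeHandle, ms->Memory, (DWORD)ms->Size, &numWritten, NULL);
delete ms;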
This is just an addon to the existing answer (with additional info after the OP edit).
The bitmap file format aligns each pixel row to a 4-byte boundary, so there are usually some bytes at the end of each line that are not pixels, and the stored data also starts with a header. Those extra bytes create the skew and the diagonal-looking lines. In your case the stream works out to 2 padding bytes per row plus a 1078-byte header (14-byte file header + 40-byte info header + 256x4-byte palette for pf8bit):

(xs + align)*ys + header = size
(658 + 2)*492 + 1078 = 325798

But beware: the align size depends on the image width, and the header size depends on the pixel format (pf8bit adds the palette).
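If you want to compute it yourself, here is a small helper (assuming the standard DIB rule that each stored row is padded up to a multiple of 4 bytes):

// bytes per stored row of a DIB: pixel bytes rounded up to a 4-byte boundary
int DibRowSize(int widthPixels, int bitsPerPixel)
{
    return ((widthPixels * bitsPerPixel + 31) / 32) * 4;
}
// DibRowSize(658, 8) == 660, i.e. 2 padding bytes per 658-pixel row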
Try this instead:
// create bmp
Graphics::TBitmap *bmp = new Graphics::TBitmap;
// bmp->Assign(???);            // a) copy image from ???
bmp->SetSize(658, 492);         // b) in case you used a) Assign, do not change the resolution
bmp->HandleType = bmDIB;
bmp->PixelFormat = pf8bit;
// bmp->Canvas->Draw(0, 0, ???); // b) copy image from ???
// here render your text
bmp->Canvas->Brush->Style = bsSolid;
bmp->Canvas->Brush->Color = clWhite;
bmp->Canvas->Font->Color = clBlack;
bmp->Canvas->Font->Name = "Tahoma";
bmp->Canvas->Font->Size = 8;
bmp->Canvas->TextOutA(5, 5, "Text");
// pixel data, row by row, without padding
for (int y = 0; y < bmp->Height; y++)
{
    BYTE *p = (BYTE*)bmp->ScanLine[y]; // pf8bit -> BYTE*
    // here send/write/store ... bmp->Width bytes from p[]
}
// Canvas->Draw(0, 0, bmp); // just render it on the Form
delete bmp; bmp = NULL;
Mixing GDI WinAPI calls for pixel-array access (BitBlt etc.) with a VCL bmDIB bitmap might cause problems and resource leaks (hence the error on exit), and it is also slower than using ScanLine[] (if coded right), so I strongly advise using native VCL functions (as in the example above) instead of GDI/WinAPI calls wherever you can.
For more info see:
#4. GDI Bitmap
Delphi / C++ builder Windows 10 1709 bitmap operations extremely slow
Draw tbitmap with scale and alpha channel faster
Also, you mention that your image source is a camera. If you use pf8bit, that means palette-indexed color, which is relatively slow and ugly if the native GDI algorithm is used to convert from a true/hi-color camera image. For a better transform see:
Effective gif/image color quantization?
simple dithering
I am creating a program that allows you to view fractals like the Mandelbrot or Julia set. I would like to render them as quickly as possible. I would love a way to put an array of uint8_t pixel values onto the screen. The array is formatted like this...
{r0,g0,b0,r1,g1,b1,...}
(A one-dimensional array of RGB color values)
I know I have the proper data because before I just set individual points and it worked...
for (int i = 0; i < height * width; ++i) {
    // setStroke and point are functions that I made that together just draw a colored point
    r.setStroke(data[i*3], data[i*3+1], data[i*3+2]);
    r.point(i % r.window.w, i / r.window.w);
}
This is a pretty slow operation, especially if the screen is big (which I would like it to be).
Is there any faster way to just put all the data onto the screen?
I tried doing something like this:
void* pixels;
int pitch;
SDL_Texture* img = SDL_CreateTexture(ren, SDL_GetWindowPixelFormat(win),
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_LockTexture(img, NULL, &pixels, &pitch);
memcpy(pixels, data, window.w * 3 * window.h);
SDL_UnlockTexture(img);
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_DestroyTexture(img);
I have no idea what I'm doing so please have mercy
Edit (thank you for comments :))
So here is what I do now:
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB888,
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_UpdateTexture(img, NULL, &data[0], window.w * 3);
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_DestroyTexture(img);
But I get an image that is not what it should look like.
I am thinking that my data is just formatted wrong. Right now it is formatted as an array of uint8_t in RGB order. Is there another way I should be formatting it? (Note: I do not need an alpha channel.)
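For what it's worth, a sketch assuming SDL2 and tightly packed 3-byte RGB data: SDL_PIXELFORMAT_RGB888 is actually a 4-bytes-per-pixel format (the top byte is unused padding), so a packed RGB array needs SDL_PIXELFORMAT_RGB24 instead:

// data holds window.w * window.h packed RGB triples; the pitch is window.w * 3 bytes
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB24,
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_UpdateTexture(img, NULL, &data[0], window.w * 3);
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_RenderPresent(ren);
SDL_DestroyTexture(img);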
I'm currently working on a printing plugin in C++, and started working with TextOut to print the text I want. It works great, but apparently the positions that TextOut takes as parameters are in pixels. Is there a way to set them to be in cm or mm, or any other unit?
Well, it's pretty simple. The coordinates are not in pixels; they are in the units of your mapping mode. It just so happens that the default mapping mode of a DC is MM_TEXT, in which each logical unit is one pixel on the device.
Change your mapping mode using SetMapMode() to the coordinate system you prefer to use. You can also play with window extents, viewport extents, and origins to customize it however you want. You might want to look at the documentation for SetMapMode() and the MM_LOMETRIC (or MM_HIMETRIC) mapping mode.
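For instance, a minimal sketch using MM_LOMETRIC, where one logical unit is 0.1 mm (hdc stands in for your printer DC; note that in MM_LOMETRIC the y axis points up, so positions down the page are negative):

int oldMode = SetMapMode(hdc, MM_LOMETRIC); // logical unit = 0.1 mm
// draw 10 mm from the left edge, 10 mm down from the top
TextOut(hdc, 100, -100, TEXT("hello"), 5);
SetMapMode(hdc, oldMode);                   // restore the previous mapping mode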
There should be special handling implemented for printing. Basically, you need to perform conversion based on HIMETRIC units. The paper size is in HIMETRIC units.
Here is the code that will help you get started (MFC-based):
if (pDC->IsPrinting())
{
    // printable area in millimeters
    int nWidth = pDC->GetDeviceCaps(HORZSIZE);
    int nHeight = pDC->GetDeviceCaps(VERTSIZE);

    CDC ScreenDC;
    ScreenDC.CreateIC(_T("DISPLAY"), NULL, NULL, NULL);
    int nPixelsPerInchX = ScreenDC.GetDeviceCaps(LOGPIXELSX);
    int nPixelsPerInchY = ScreenDC.GetDeviceCaps(LOGPIXELSY);

    // paper size is in HIMETRIC units. we need to convert
    CSize PaperSize(MulDiv(nWidth, nPixelsPerInchX*100, HIMETRIC_PER_INCH),
                    MulDiv(nHeight, nPixelsPerInchY*100, HIMETRIC_PER_INCH));

    // now we need to calculate zoom ratio so the layer content fits on page
    double fZoomX = (double)PaperSize.cx/(double)m_DocSize.cx;
    double fZoomY = (double)PaperSize.cy/(double)m_DocSize.cy;
    m_PrintZoom = min(fZoomX, fZoomY);
    ResetViewSize(TRUE);

    if (pDC->IsKindOf(RUNTIME_CLASS(CPreviewDC)))
    {
        pDC->SetMapMode(MM_ANISOTROPIC);
        pDC->SetWindowExt(nPixelsPerInchX, nPixelsPerInchY);
        pDC->SetViewportExt(pDC->GetDeviceCaps(LOGPIXELSX), pDC->GetDeviceCaps(LOGPIXELSY));
        pDC->SetViewportOrg(0, 0);
        pDC->SetWindowOrg(0, 0);
    }
}
I have a program which runs in a window using OpenGL (VS2012 with freeglut 2.8.1). Basically at every time step (run via a call to glutPostRedisplay from my glutIdleFunc hook) I call my own draw function followed by a call to glFlush to display the result. Then I call my own screenShot function which uses the glReadPixels function to dump the pixels to a tga file.
The problem with this setup is that the files are empty when the window gets minimised. That is to say, the output from glReadPixels is empty; how can I avoid this?
Here is a copy of the screenShot function I am using (I am not the copyright holder):
//////////////////////////////////////////////////
// Grab the OpenGL screen and save it as a .tga //
// Copyright (C) Marius Andra 2001 //
// http://cone3d.gz.ee EMAIL: cone3d@hot.ee //
//////////////////////////////////////////////////
// (modified by me a little)
int screenShot(int const num)
{
    typedef unsigned char uchar;
    // we will store the image data here
    uchar *pixels;
    // the thingy we use to write files
    FILE * shot;
    // we get the width/height of the screen into this array
    int screenStats[4];

    // get the width/height of the window
    glGetIntegerv(GL_VIEWPORT, screenStats);

    // generate an array large enough to hold the pixel data
    // (width*height*bytesPerPixel)
    pixels = new unsigned char[screenStats[2]*screenStats[3]*3];

    // read in the pixel data; TGA's pixels are BGR aligned
    glReadPixels(0, 0, screenStats[2], screenStats[3], GL_BGR,
                 GL_UNSIGNED_BYTE, pixels);

    // open the file for writing. If unsuccessful, clean up and return 1
    std::string filename = kScreenShotFileNamePrefix + Function::Num2Str(num) + ".tga";
    shot = fopen(filename.c_str(), "wb");
    if (shot == NULL)
    {
        delete [] pixels;
        return 1;
    }

    // this is the tga header; it must be at the beginning of
    // every (uncompressed) .tga
    uchar TGAheader[12] = {0,0,2,0,0,0,0,0,0,0,0,0};
    // the header that is used to get the dimensions of the .tga
    // header[1]*256+header[0] - width
    // header[3]*256+header[2] - height
    // header[4] - bits per pixel
    // header[5] - ?
    uchar header[6] = {((int)(screenStats[2]%256)),
                       ((int)(screenStats[2]/256)),
                       ((int)(screenStats[3]%256)),
                       ((int)(screenStats[3]/256)), 24, 0};

    // write out the TGA header
    fwrite(TGAheader, sizeof(uchar), 12, shot);
    // write out the header
    fwrite(header, sizeof(uchar), 6, shot);
    // write the pixels
    fwrite(pixels, sizeof(uchar),
           screenStats[2]*screenStats[3]*3, shot);

    // close the file
    fclose(shot);
    // free the memory
    delete [] pixels;

    // return success
    return 0;
}
So how can I print the screenshot to a TGA file regardless of whether Windows decides to actually display the content on the monitor?
Note: Because I am trying to keep a visual record of the progress of a simulation, I need to print every frame, regardless of whether it is being rendered. I realise that last statement is a bit of a contradiction, since I need to render the frame in order to produce the screengrab. To rephrase: I need glReadPixels (or some alternative function) to produce the updated state of my program at every step so that I can print it to a file, regardless of whether Windows will choose to display it.
Sounds like you're running afoul of the pixel ownership problem.
Render to an FBO and use glReadPixels() to slurp images out of that instead of the front buffer.
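A minimal sketch of that approach (the function names are core since OpenGL 3.0; with VS2012 and freeglut you would typically load them through an extension loader such as GLEW, and width, height, and pixels are assumed to be set up as in the screenShot code):

GLuint fbo, color;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &color);

// renderbuffer to hold the color data, sized to the desired capture
glBindRenderbuffer(GL_RENDERBUFFER, color);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, color);

// ... draw the scene as usual ...

// read back from the FBO; pixel ownership no longer applies
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the window's framebuffer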
I would suggest keeping the last rendered frame stored in memory and updating that memory's contents whenever an update is called and there is actual pixel data in the new render. Either that, or you could perhaps use the accumulation buffer, though I can't quite recall how it stores older frames (it may just update so fast that it ends up storing no render data either).
Another solution might be to use a shader to manually render each frame and write the result to a file.
Maybe not really big, just a hundred frames or something. Is the only way to load them in by making an array and loading each image individually?
load_image() is a function I made which loads the images and converts their BPP.
expl[0] = load_image( "explode1.gif" );
expl[1] = load_image( "explode2.gif" );
expl[2] = load_image( "explode3.gif" );
expl[3] = load_image( "explode4.gif" );
...
expl[99] = load_image( "explode100.gif" );
Seems like there should be a better way... at least I hope.
A common technique is spritesheets, in which a single, large image is divided into a grid of cells, with each cell containing one frame of an animation. Often, all animation frames for any game entity are placed on a single, sometimes huge, sprite sheet.
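As a sketch of the cell lookup (assuming SDL surfaces like your load_image() code, equally sized cells, and a hypothetical 'columns' cells per sheet row):

// source rectangle of frame i inside the sheet
SDL_Rect FrameRect(int i, int cellW, int cellH, int columns)
{
    SDL_Rect r;
    r.x = (i % columns) * cellW;
    r.y = (i / columns) * cellH;
    r.w = cellW;
    r.h = cellH;
    return r;
}
// e.g. blit frame 37 out of a sheet of 64x64 cells, 10 per row:
// SDL_Rect src = FrameRect(37, 64, 64, 10);
// SDL_BlitSurface(sheet, &src, screen, &dst);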
Maybe simplify your loading with a utility function that builds a filename for each iteration of a loop:
void LoadAnimation(const char* isFileBase, int numFrames)
{
    char szFileName[255];
    for (int i = 0; i < numFrames; i++)
    {
        // append the frame number and .gif to the file base to get the filename
        // (the files are numbered starting at 1, e.g. explode1.gif)
        sprintf(szFileName, "%s%d.gif", isFileBase, i + 1);
        expl[i] = load_image(szFileName);
    }
}
Instead of loading the frames as a grid, stack them all in one vertical strip (same image). Then you only need to know how many rows there are per frame, and you can set a pointer to the frame's row offset. You still end up with contiguous scan lines that can be displayed directly or trivially chewed off into separate images.
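A sketch of both uses, assuming an SDL surface named strip that holds the stacked frames, with frameH rows per frame:

// blit frame i directly out of the strip
SDL_Rect src = { 0, i * frameH, strip->w, frameH };
SDL_BlitSurface(strip, &src, screen, &dst);

// or point at frame i's first scan line in the raw pixel data
Uint8 *framePixels = (Uint8*)strip->pixels + i * frameH * strip->pitch;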