Draw array of bits (RGB) in Windows - C++

I have an array of raw rgb data.
I would like to know how I can draw these pixels on the screen in Windows.
Right now I use the API function DrawDIBits, but I have to flip my image data upside down first.

I always use SetDIBitsToDevice, but DrawDIBits could be okay as well (haven't checked).
As for the upside-down nature of the windows blit functions:
There is a workaround. If you pass a BITMAPINFOHEADER or BITMAPINFO structure to the function, just negate the value in the bitmap-height member. This tells GDI to do the blit as if the height were positive, but to interpret the data as being stored in top-down order.
You may get a nice speed improvement by this "hack" as well.
If you want to shuffle the byte order of the pixels (e.g. turn ARGB into BGRA or so), you can use the BITMAPV4HEADER structure and tell GDI how your pixel data is organized. That functionality is rarely used, but it has worked since Windows 98. I'd say it's safe to use these days.
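
Putting both tricks together, a top-down 32-bpp blit might look roughly like this (a minimal, untested sketch; pixels, width and height stand in for your own buffer and its dimensions):

    #include <windows.h>

    void BlitTopDown(HDC hdc, const void* pixels, int width, int height)
    {
        BITMAPV4HEADER hdr = {};
        hdr.bV4Size          = sizeof(hdr);
        hdr.bV4Width         = width;
        hdr.bV4Height        = -height;       // negative height = top-down rows
        hdr.bV4Planes        = 1;
        hdr.bV4BitCount      = 32;
        hdr.bV4V4Compression = BI_BITFIELDS;  // the masks below describe the channel order
        hdr.bV4RedMask       = 0x00FF0000;    // adjust these masks if your
        hdr.bV4GreenMask     = 0x0000FF00;    // pixels are e.g. RGBA rather
        hdr.bV4BlueMask      = 0x000000FF;    // than the BGRA assumed here
        hdr.bV4AlphaMask     = 0xFF000000;

        SetDIBitsToDevice(hdc, 0, 0, width, height, 0, 0, 0, height,
                          pixels, reinterpret_cast<const BITMAPINFO*>(&hdr),
                          DIB_RGB_COLORS);
    }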

If you mean drawing it without reversing the (R,G,B) into (B,G,R), I don't know an automatic way to do that.
If you mean drawing it without padding each line to a multiple of 4 bytes, you can do it by drawing each line one at a time, as in the sketch below. It will be slow, though.
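
For what it's worth, the line-at-a-time approach could look something like this (an untested sketch; note that GDI may still read up to the DWORD-aligned length of a row, so it's prudent to leave a few spare bytes at the very end of the buffer):

    #include <windows.h>

    void BlitUnpadded24(HDC hdc, const BYTE* pixels, int width, int height)
    {
        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = 1;      // one row; orientation is moot
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 24;
        bmi.bmiHeader.biCompression = BI_RGB;

        const int stride = width * 3;         // tight stride, no DWORD padding
        for (int y = 0; y < height; ++y)
            SetDIBitsToDevice(hdc, 0, y, width, 1, 0, 0, 0, 1,
                              pixels + y * stride, &bmi, DIB_RGB_COLORS);
    }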

Related

How to use accurate GDI font size?

A generic question: in GDI the font size is an int, so it is not accurate when zooming in or out on text drawn by GDI in a window.
Is there a simple way to use a floating-point font size in GDI to make the font size accurate?
Thanks a lot for your kind help!
GDI text does not scale linearly, and not just because it uses only integer sizes, but also because of hinting which tries to make text look better when rendered at a resolution on the order of its stroke width. If you double the height of a GDI font, the width may not exactly double. GDI text was once hardware accelerated but no longer is on modern versions of Windows. That doesn't matter much for performance because it's relatively simple and efficient, and hardware is fast.
GDI+ text will scale linearly in the sense that doubling the height will double the width (well, it's really close). But text may look "fuzzier" because it uses grayscale antialiasing instead of ClearType subpixel rendering. GDI+ text tends to be slower than GDI because more work is done in software.
DirectWrite (which runs on Direct2D) scales linearly and generally looks very good. It's harder to write efficient Direct2D/DirectWrite code, and, depending on your requirements, you might have to drop back to GDI if you also need to print. If you try to write DPI-aware programs, you may find yourself having to do a lot of conversions between DirectWrite's device-independent coordinates for graphics and mouse coordinates that are still device-dependent. DirectWrite is hardware accelerated, so it's fast if you use it efficiently by caching lots of intermediate data structures.
With CreateFont (and CreateFontIndirect) you specify the font size in pixels, so it remains accurate to the pixel regardless of zooming (within the constraints of the sizes available in the font that's selected--if you use a bitmapped font, scaling may be limited or nonexistent).
If you're using CreatePointFont to create the font, you specify the font size in tenths of a point, which usually works out to less than a pixel, so it gets rounded to the nearest pixel. If you really want to be sure you're specifying the height to the nearest pixel, you probably want to use CreateFont/CreateFontIndirect instead of CreatePointFont.
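
For example, one plausible way to round a fractional point size to the nearest pixel yourself (an untested sketch; the face name is just an assumption):

    #include <windows.h>

    HFONT CreatePixelAccurateFont(HDC hdc, double pointSize)
    {
        LOGFONT lf = {};
        // convert points to pixels for this device, rounding to nearest;
        // a negative lfHeight requests character height, not cell height
        int logPixelsY = GetDeviceCaps(hdc, LOGPIXELSY);
        lf.lfHeight = -static_cast<int>(pointSize * logPixelsY / 72.0 + 0.5);
        lf.lfWeight = FW_NORMAL;
        lstrcpyn(lf.lfFaceName, TEXT("Segoe UI"), LF_FACESIZE);
        return CreateFontIndirect(&lf);
    }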

Real time drawing in GDI

I'm currently writing a 3D renderer (for fun and research), so I need a way to draw my framebuffer to a window. Since I'm doing all of my calculations on the CPU, the drawing needs to be as fast as possible.
One of my goals is to use no existing graphics library (OpenGL/DirectX) so the drawing to the screen is pure Win32. In my research I've found a couple of ways to create and draw bitmaps and now I'm looking for the best one.
My current implementation uses a bitmap created with CreateDIBSection(), which is drawn to my window DC using BitBlt().
CreateDIBSection() gives me a pointer to my bitmap bytes so I can manipulate it without copying. Using this method I achieve an update rate of about 260 FPS (without any rendering done).
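
The setup looks roughly like this (a trimmed-down sketch with assumed dimensions, not my exact code):

    #include <windows.h>

    void* bits = nullptr;
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = 800;    // assumed framebuffer size
    bmi.bmiHeader.biHeight      = -600;   // negative = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC windowDC = GetDC(hwnd);           // hwnd is the render window
    HDC memDC    = CreateCompatibleDC(windowDC);
    HBITMAP dib  = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    SelectObject(memDC, dib);

    // ... write pixels into `bits` each frame, then:
    BitBlt(windowDC, 0, 0, 800, 600, memDC, 0, 0, SRCCOPY);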
This seems a bit slow, so I'm looking for optimizations.
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
How can I make sure my DIB bitmap and window are compatible?
Are there methods of drawing a bitmap which are faster than my current implementation?
I've also read something about DrawDibDraw(); can anyone confirm that this is faster?
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
Very few systems run in a palette mode any more, so it seems unlikely this is an issue for you.
Aside from palettes, some GDI functions also cause a color matching conversion to be applied if the source bitmap and the destination have different gamuts. BitBlt, however, does not do this type of color matching, so you're not paying a price for that.
How can I make sure my DIB bitmap and window are compatible?
You don't. You can use DIBs (which are Device-Independent Bitmaps) or compatible (device-dependent) bitmaps. It's possible that your DIB bitmap matches the current mode of your device. For example, if you're using a 32 bpp DIB, and your display is in that same mode, then no conversion is necessary. If you want a bitmap that's guaranteed to be in the same mode as your device, then you can't use a DIB, and you give up the nice properties it provides for predictable pixel layout and format.
Are there methods of drawing a bitmap which are faster than my current implementation?
The limitation is most likely in getting the data from system memory to graphics adapter memory. To get around that limitation, you need a faster graphics bus, or you need to render directly into graphic memory, which means you'd need to do your computation on the GPU rather than the CPU.
If you're rendering a 1920 x 1080 pixel image at 24 bits per pixel, that's close to 6 MB for your frame buffer. That's an awful lot of data. If you're doing that 260 times per second, that's actually pretty impressive.
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
It's conceivable, but the only way to know would be to measure it. And the results might vary from machine to machine because of differences in the graphics adapter (and which bus they use).

Fastest way of plotting a point on screen in MFC C++ app

I have an application that contains many millions of 3D RGB points that form an image when plotted. What is the fastest way of getting them to the screen in an MFC application? I've tried CDC::SetPixelV in conjunction with a bitmap, which seems quite slow, and am looking towards a Direct3D or OpenGL window in my MFC view class. Any other good places to look?
Double buffering is your solution. There are many examples on CodeProject.
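A minimal sketch of the idea in an MFC view (untested; CMyView is a hypothetical CView-derived class):

    void CMyView::OnPaint()
    {
        CPaintDC dc(this);
        CRect rc;
        GetClientRect(&rc);

        // draw everything into an off-screen bitmap first
        CDC memDC;
        memDC.CreateCompatibleDC(&dc);
        CBitmap backBuffer;
        backBuffer.CreateCompatibleBitmap(&dc, rc.Width(), rc.Height());
        CBitmap* oldBmp = memDC.SelectObject(&backBuffer);

        memDC.FillSolidRect(rc, RGB(0, 0, 0));
        // ... plot the points into memDC here ...

        // then copy the finished frame to the screen in one blit
        dc.BitBlt(0, 0, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
        memDC.SelectObject(oldBmp);
    }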
Sounds like a point cloud. You might find some good information searching on that term.
3D hardware is the fastest way to take 3D points and get them into a 2D display, so either Direct3D or OpenGL seem like the obvious choices.
If the number of points is much greater than the number of pixels in your display, then you'll probably first want to cull points that are trivially outside the view. You put all your points in some sort of spatial partitioning structure (like an octree) and omit the points inside any node that's completely outside the viewing frustum. This reduces the amount of data you have to push from system memory to GPU memory, which will likely be the bottleneck. (If your point cloud is static, and you're just building a fly-through, and if your GPU has enough memory, you could skip the culling, send all the data at once, and then just update the transforms for each frame.)
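A hedged sketch of that culling step (all type and function names here are hypothetical):

    #include <vector>

    struct Point3 { float x, y, z; unsigned color; };
    struct Box    { float min[3], max[3]; };
    struct Plane  { float a, b, c, d; };      // inside when ax+by+cz+d >= 0

    struct OctreeNode {
        Box bounds;
        std::vector<Point3> points;           // payload at the leaves
        OctreeNode* child[8] = {};
    };

    // standard "positive vertex" test: check the box corner farthest along
    // the plane normal; if even that corner is behind, the box is outside
    static bool OutsideFrustum(const Box& b, const Plane planes[6])
    {
        for (int i = 0; i < 6; ++i) {
            const Plane& p = planes[i];
            float x = p.a >= 0 ? b.max[0] : b.min[0];
            float y = p.b >= 0 ? b.max[1] : b.min[1];
            float z = p.c >= 0 ? b.max[2] : b.min[2];
            if (p.a * x + p.b * y + p.c * z + p.d < 0)
                return true;
        }
        return false;
    }

    static void CollectVisible(const OctreeNode* n, const Plane planes[6],
                               std::vector<const Point3*>& out)
    {
        if (!n || OutsideFrustum(n->bounds, planes))
            return;                           // cull the whole subtree
        for (const Point3& pt : n->points)
            out.push_back(&pt);
        for (const OctreeNode* c : n->child)
            CollectVisible(c, planes, out);
    }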
If you don't want to use the GPU and instead write a software renderer, you'll want to render to a bitmap that's in the same pixel format as your display (to eliminate the chance of the blit needing to do any pixel-format conversion as it blasts the bitmap to the display). For reasonable window sizes, blitting at 30 frames per second is feasible, but it might not leave much time for the CPU to do the rendering.
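
Querying the display's current format so the back buffer can match it is straightforward (a small sketch):

    HDC screen = GetDC(nullptr);
    int bitsPerPixel = GetDeviceCaps(screen, BITSPIXEL) * GetDeviceCaps(screen, PLANES);
    ReleaseDC(nullptr, screen);
    // choose a matching biBitCount for the back-buffer DIB so the blit
    // does not have to convert pixel formats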

Read Framebuffer-texture like an 1D array

I am doing some GPGPU calculations with GL and want to read my results back from the framebuffer.
My framebuffer texture is logically a 1D array, but I made it 2D to have a bigger area. Now I want to read from an arbitrary position in the framebuffer texture with any given length.
That means all calculations are already done on the GPU side and I only need to pass certain data to the CPU, data that could wrap across the border of the texture.
Is this possible? If so, is it slower or faster than calling glReadPixels on the whole image and then cutting out what I need?
EDIT
Of course I know about OpenCL/CUDA but they are not desired because I want my program to run out of the box on (almost) any platform.
Also I know that glReadPixels is very slow, and one reason might be that it offers functionality I do not need (operating in 2D). That's why I asked for a more basic function that might be faster.
Reading the whole framebuffer with glReadPixels just to discard it all except for a few pixels/lines would be grossly inefficient. But glReadPixels lets you specify a rect within the framebuffer, so why not just restrict it to fetching the few rows of interest? You may end up fetching some extra data at the start and end of the first and last lines fetched, but I suspect the overhead of that is minimal compared with making multiple calls.
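In code, that might look roughly like the following (an untested sketch; it assumes an RGBA float framebuffer of width texW holding the logical 1D array in row-major order):

    #include <GL/gl.h>
    #include <vector>

    std::vector<float> ReadRange(int texW, int offset, int count)
    {
        // rows that cover elements [offset, offset + count)
        int firstRow = offset / texW;
        int lastRow  = (offset + count - 1) / texW;
        int rows     = lastRow - firstRow + 1;

        std::vector<float> rowData(static_cast<size_t>(texW) * rows * 4);
        glReadPixels(0, firstRow, texW, rows, GL_RGBA, GL_FLOAT, rowData.data());

        // trim the surplus at the start of the first row / end of the last
        size_t start = static_cast<size_t>(offset - firstRow * texW) * 4;
        return std::vector<float>(rowData.begin() + start,
                                  rowData.begin() + start + static_cast<size_t>(count) * 4);
    }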
Possibly writing your data to the framebuffer in tiles and/or using Morton order might help structure it so that a tighter bounding box can be found and the amount of extra data retrieved is minimised.
You can use a pixel buffer object (PBO) to transfer pixel data from the framebuffer to the PBO, then use glMapBufferARB to read the data directly:
http://www.songho.ca/opengl/gl_pbo.html
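The core of the PBO approach looks roughly like this (a sketch, assuming the buffer-object entry points are available, i.e. GL 1.5+/ARB_pixel_buffer_object, and that width and height are your framebuffer dimensions):

    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);

    // with a pack PBO bound, glReadPixels returns without waiting; the copy
    // into the buffer happens asynchronously
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // mapping is where any waiting happens
    void* mapped = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (mapped) {
        // ... consume the pixel data ...
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);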

Draw scaled images using CImageList

If you have images stored in a CImageList, is there an easy way to render them (with proper transparency) scaled to fit a given target rectangle? CImageList::DrawEx takes size information but I don't believe it does scaling, only cropping?
I guess you could render them to an offscreen bitmap, then StretchBlt() them to either your device or another offscreen bitmap, letting StretchBlt() do the scaling... Getting the transparency to carry over correctly will require some fiddling, though; depending on your circumstances you may need to use AlphaBlend() instead.
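The StretchBlt() route might look something like this (an untested sketch; the solid background fill is a simplification, so as noted above you may need AlphaBlend() instead if you need real per-pixel transparency):

    void DrawScaled(CDC* pDC, CImageList& list, int index,
                    const CRect& target, const CSize& native)
    {
        // render the entry at its native size into a memory DC
        CDC memDC;
        memDC.CreateCompatibleDC(pDC);
        CBitmap bmp;
        bmp.CreateCompatibleBitmap(pDC, native.cx, native.cy);
        CBitmap* oldBmp = memDC.SelectObject(&bmp);
        memDC.FillSolidRect(0, 0, native.cx, native.cy, RGB(255, 255, 255));
        list.Draw(&memDC, index, CPoint(0, 0), ILD_NORMAL);

        // let StretchBlt do the scaling; HALFTONE smooths the result
        int oldMode = pDC->SetStretchBltMode(HALFTONE);
        pDC->StretchBlt(target.left, target.top, target.Width(), target.Height(),
                        &memDC, 0, 0, native.cx, native.cy, SRCCOPY);
        pDC->SetStretchBltMode(oldMode);
        memDC.SelectObject(oldBmp);
    }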
My opinion is that most of the Win32 image handling code, and by extension the MFC equivalents, like CImageList, CIcon, CImage, CBitmap, ... are inadequate for today's graphics needs. Handling per-pixel transparency, especially, hardly ever works consistently. I usually store my images in a CImage and use ::AlphaBlend() everywhere to get them to the DC, or I use GetDIBits()/SetDIBits() and directly manipulate the RGBA entries (not very practical for doing scaling and similar operations, I admit). On the other hand, I understand what it's like having to maintain code that uses these things already and wanting to update it to give it a bit of a modern look...