masked_blit Allegro 4.2.1 - C++

When I use the masked_blit function:
masked_blit(animations[which], buffer, 0, 0, x, y, animations[which]->w, animations[which]->h)
I get wrong colors on my buffer, and my bitmap appears shifted or distorted.
That is my problem.
Thanks for any help.

The most likely cause of your problem is that you are using a video bitmap with this call and your hardware does not support it OR the two bitmaps (animations[] and buffer) are different color depths.
To rule out the possibility of the hardware not supporting this feature, check whether the GFX_HW_VRAM_BLIT_MASKED bit is set in the gfx_capabilities flags on your PC.
If they are the same color depth but your hardware does not support the feature, you can always make the calls with memory bitmaps as the source, so the animations reside in RAM instead of video memory.
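For illustration, a minimal sketch of those two checks in Allegro 4 (my own example, reusing the names animations, buffer, which, x and y from the question):
// Make sure both bitmaps use the same color depth before blitting.
if (bitmap_color_depth(animations[which]) != bitmap_color_depth(buffer))
    allegro_message("Color depth mismatch: %d vs %d\n",
                    bitmap_color_depth(animations[which]),
                    bitmap_color_depth(buffer));

// Check whether hardware-accelerated masked blits from VRAM are supported.
if (!(gfx_capabilities & GFX_HW_VRAM_BLIT_MASKED)) {
    // Fall back to a memory bitmap as the blit source.
    BITMAP *mem_copy = create_bitmap(animations[which]->w, animations[which]->h);
    blit(animations[which], mem_copy, 0, 0, 0, 0, mem_copy->w, mem_copy->h);
    masked_blit(mem_copy, buffer, 0, 0, x, y, mem_copy->w, mem_copy->h);
    destroy_bitmap(mem_copy);
}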
Source:
Allegro 4.2.1 manual (pdf) - Sections 1.15.3 masked_blit and 1.9.13 gfx_capabilities

Related

How does HWSURFACE differ from a texture?

When making a surface in SDL, there's an option to use the HWSURFACE flag, which I imagine means the surface is handled by the GPU instead of the CPU. But now SDL2 has textures, and I'm wondering: what's the difference? Will there be any performance difference using hardware surfaces instead of textures? Do they behave the same?
I've tried googling all over, but I can only find info on regular software surfaces.
There is no texture support in SDL1, and there is no HWSURFACE (or any other meaningful surface flag) in SDL2; the flags parameter of SDL_CreateRGBSurface in SDL2 is documented as "The flags are obsolete and should be set to 0". There is no sane way to mix the two.
Specifying SDL_HWSURFACE would cause the surface to be created using video memory. PCs without GPUs still have video memory and it is faster to put data in that memory area onto the screen than to copy it from system RAM.
Textures are uploaded to the GPU's own dedicated RAM and data must be passed through that memory in order to be put on the screen at all. SDL2 no longer has the SDL_HWSURFACE flag because the rendering subsystem uses the GPU via OpenGL or Direct3D and cannot use the old way to get graphics on the screen.
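To make the contrast concrete, here is a minimal SDL2 sketch (my own illustration, not from the answers): the surface lives in system RAM with flags set to 0, and the texture is what actually ends up in GPU memory.
SDL_Window *win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                   SDL_WINDOWPOS_CENTERED, 640, 480, 0);
SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

// In SDL2 the flags argument is 0 -- SDL_HWSURFACE and friends are gone.
SDL_Surface *surf = SDL_CreateRGBSurface(0, 640, 480, 32, 0, 0, 0, 0);
// ... fill surf->pixels on the CPU ...

// Upload the surface into a texture that lives in GPU memory.
SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, surf);
SDL_FreeSurface(surf);

SDL_RenderClear(ren);
SDL_RenderCopy(ren, tex, NULL, NULL);
SDL_RenderPresent(ren);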

How are pixels drawn at the lowest level

I can use SetPixel (GDI) to set any pixel on the screen to a colour.
So how would I reproduce SetPixel at the lowest assembly level? What actually happens that triggers the instructions that say: OK, send a byte to position x in the framebuffer?
SetPixel most probably just calculates the address of the given pixel using a formula like:
pixel = frame_start + y * frame_width + x
and then simply does *pixel = COLOR.
You can actually use CreateDIBSection to create your own buffer and associate it with a DeviceContext; then you can modify pixels at a low level using the formula above. This is useful if you have your own graphics library, like AGG.
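As a rough sketch of that approach (names like hdc, width, height, x and y are placeholders of mine, and I assume a 32-bit top-down DIB):
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void *bits = NULL;
HBITMAP dib = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);

// "SetPixel" by hand: write straight into the pixel array.
DWORD *pixels = (DWORD *)bits;
pixels[y * width + x] = 0x00FF0000;      // stored as 0x00RRGGBB, here pure red

// Select the DIB into a memory DC and BitBlt it to the window when done.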
When learning about GDI, I like to look into the WINE source code; there you can see how complicated it actually is (dibdrv_SetPixel):
http://fossies.org/dox/wine-1.6.1/gdi32_2dibdrv_2graphics_8c_source.html
It must also take into account clipping regions, different pixel sizes and probably other features. It is also possible that some drivers accelerate this in hardware, but I have not heard of it.
If you want to recreate SetPixel, you need to know how your graphics hardware works. Most hardware manufacturers follow at least the VESA standard, see here. This standard specifies that you can set the display mode using interrupt 0x10.
Once the display mode is set, the displayed memory region is defined by the standard and you can simply write directly to display memory.
Advanced graphics hardware deviates from the standard (which only covers the basics), so the above does not work for advanced features; for those you'll have to resort to the GPU documentation.
The "how" is always depends on "what", what I mean is that for different setups there are different methods, different systems different methods, what is common is that they are usually not allowing you to do it directly i.e. write to a memory address that will be displayed.
Some devices with a dedicated setup may allow you to do that (some consoles do, as far as I know), but even there you will have to do some locking or other utility work to make it work as it should.
Since in modern PCs graphics accelerators are fused into the video card (one counterexample is the Voodoo 1, which needed a separate video card in order to operate, since it was just a 3D accelerator), the GPU usually holds the framebuffer it draws from in its own memory, making it inaccessible from the outside.
So generally you say "here is a memory address, download the data into your own GPU memory and show it on screen", and this is where desktop composition comes in. Since video cards suffer from this transfer all the time, it is in fact faster to send the data required to draw and let the GPU do the drawing itself. So Aero is just a visual; as far as I know the desktop compositor works regardless of Aero, making the drawing GPU dependent.
So low-level functions such as SetPixel have technically been software since Windows 7 because of the things mentioned above, simply because you can't access the memory directly. What I think it probably does is keep a bitmap for every HDC, and when you use SetPixel you simply set a pixel in that bitmap, which is later sent to the GPU for display.
In the case of DOS or other old tech it is probably just emulated, in the same way it is done for GDI.
So in light of this:
So how would I reproduce SetPixel at the lowest assembly level?
It is probably just a copy to a memory location, but Windows integrates the window surfaces and its framebuffer in a way that you will never get direct access to. One way to emulate what it does is to create a bitmap, get its memory pointer, set the pixel manually and then tell Windows to show this bitmap on screen.
What actually happens that triggers the instructions that say: OK, send a byte to position x in the framebuffer?
Like I said before, it really depends on the environment what happens at the moment you make this call; the code that gets executed comes from different places, some written by Microsoft and some by the GPU manufacturer, and all of these together produce that pixel you see on your screen.
To set a pixel in the framebuffer, using a video mode with 32-bit color, we need the address of the pixel and the color of the pixel.
With the address and the color we can simply use a move instruction to write the color to the framebuffer.
A sample using the EDI register as a 32-bit address register (the default segment register is DS) to address the framebuffer with the move instruction.
x86 Intel syntax:
mov edi, Framebuffer   ; load the address (upper left corner) into the EDI register
mov DWORD [edi], Color ; write the color to the address at DS:EDI
The first instruction loads the EDI register with the address of the framebuffer and the second instruction writes the color to the framebuffer.
Hint for calculating the address of a pixel inside the framebuffer:
Some video modes may use a longer scanline with more bytes than the horizontal resolution needs, with a part lying outside of the visible view.
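In C terms (my own sketch), the address calculation with such a scanline length ("pitch", in bytes) would look like this:
// framebuffer: base address, pitch: bytes per scanline (may exceed width * 4)
uint32_t *pixel = (uint32_t *)((uint8_t *)framebuffer + y * pitch) + x;
*pixel = color;                          // 32-bit color value, e.g. 0x00FF0000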
Dirk

Real time drawing in GDI

I'm currently writing a 3D renderer (for fun and research), so I need a way to draw my framebuffer to a window. Since I'm doing all of my calculations on CPU, the drawing needs to be as fast as possible.
One of my goals is to use no existing graphics library (OpenGL/DirectX) so the drawing to the screen is pure Win32. In my research I've found a couple of ways to create and draw bitmaps and now I'm looking for the best one.
My current implementation uses a bitmap created with CreateDIBSection(), which is drawn to my window DC using BitBlt().
CreateDIBSection() gives me a pointer to my bitmap bytes, so I can manipulate it without copying. Using this method I achieve an update rate of about 260 FPS (without any rendering done).
This seems a bit slow, so I'm looking for optimizations.
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
How can I make sure my DIB bitmap and window are compatible?
Are there methods of drawing a bitmap that are faster than my current implementation?
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
Very few systems run in a palette mode any more, so it seems unlikely this is an issue for you.
Aside from palettes, some GDI functions also cause a color matching conversion to be applied if the source bitmap and the destination have different gamuts. BitBlt, however, does not do this type of color matching, so you're not paying a price for that.
How can I make sure my DIB bitmap and window are compatible?
You don't. You can use DIBs (which are device-independent bitmaps) or compatible (device-dependent) bitmaps. It's possible that your DIB matches the current mode of your device: for example, if you're using a 32 bpp DIB and your display is in that same mode, then no conversion is necessary. If you want a bitmap that's guaranteed to be in the same mode as your device, then you can't use a DIB and all the nice properties it provides for a predictable pixel layout and format.
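If you want to verify that at runtime, here is a small sketch of my own (not part of the original answer) that queries the display depth with GetDeviceCaps():
HDC screen = GetDC(NULL);
int bpp = GetDeviceCaps(screen, BITSPIXEL) * GetDeviceCaps(screen, PLANES);
ReleaseDC(NULL, screen);

// If bpp == 32, a 32 bpp BI_RGB DIB section needs no format conversion
// when blitted to the window; otherwise GDI converts on every BitBlt.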
Are there methods of drawing a bitmap that are faster than my current implementation?
The limitation is most likely in getting the data from system memory to graphics adapter memory. To get around that limitation, you need a faster graphics bus, or you need to render directly into graphic memory, which means you'd need to do your computation on the GPU rather than the CPU.
If you're rendering a 1920 x 1080 pixel image at 24 bits per pixel, that's close to 6 MB for your frame buffer. That's an awful lot of data. If you're doing that 260 times per second (roughly 1.6 GB/s), that's actually pretty impressive.
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
It's conceivable, but the only way to know would be to measure it. And the results might vary from machine to machine because of differences in the graphics adapter (and which bus they use).

Draw on DeviceContext from COLORREF[]

I have a pointer to a COLORREF buffer, something like: COLORREF* buf = new COLORREF[x*y];
A subroutine fills this buffer with color-information. Each COLORREF represents one pixel.
Now I want to draw this buffer to a device context. My current approach works, but it is pretty slow (~200 ms per drawing, depending on the size of the image):
for (size_t i = 0; i < pixelpos; ++i)
{
    // Get X and Y coordinates from the 1-dimensional buffer.
    size_t y = i / wnd_size.cx;
    size_t x = i % wnd_size.cx;
    ::SetPixelV(hDC, x, y, buf[i]);
}
Is there a way to do this faster; all at once, not one pixel after another?
I am not really familiar with GDI. I have heard about a lot of APIs like CreateDIBitmap(), BitBlt(), HBITMAP, CImage and all that stuff, but I have no idea how to apply them. It all seems pretty complicated...
MFC is also welcome.
Any ideas?
Thanks in advance.
(Background: the subroutine I mentioned above is an OpenCL kernel - the GPU calculates a Mandelbrot image and saves it in the COLORREF buffer.)
EDIT:
Thank you all for your suggestions. The answers (and links) gave me some insight into Windows graphics programming. The performance is now acceptable (semi-realtime scrolling into the Mandelbrot works :))
I ended up with the following solution (MFC):
...
CDC dcMemory;
dcMemory.CreateCompatibleDC(pDC);

// Wrap the raw COLORREF buffer in a device-dependent bitmap.
CBitmap mandelbrotBmp;
mandelbrotBmp.CreateBitmap(clientRect.Width(), clientRect.Height(), 1, 32, buf);

// Select it into the memory DC and blit to the window in one call.
CBitmap* oldBmp = dcMemory.SelectObject(&mandelbrotBmp);
pDC->BitBlt(0, 0, clientRect.Width(), clientRect.Height(), &dcMemory, 0, 0, SRCCOPY);

dcMemory.SelectObject(oldBmp);
mandelbrotBmp.DeleteObject();
So basically CBitmap::CreateBitmap() saved me from using the raw API (which I still do not fully understand). The example in the documentation of CDC::CreateCompatibleDC was also helpful.
My Mandelbrot is now blue - with SetPixelV() it was red. But I guess that has something to do with how CBitmap::CreateBitmap() interprets my buffer; not really important.
I might try the OpenGL suggestion because it would have been the much more logical choice and I wanted to try OpenCL under Linux anyway.
Under the circumstances, I'd probably use a DIB section (which you create with CreateDIBSection). A DIB section is a bitmap that allows you to access the contents directly as an array, but still use it with all the usual GDI functions.
I think that'll give you the best performance of anything based on GDI. If you need better, then Kornel is basically correct -- you'll need to switch to something with more direct support for hardware acceleration (DirectX or OpenGL -- though IMO, OpenGL is a much better choice for the job than DirectX).
Given that you're currently doing the calculation in OpenCL and depositing the output in a color buffer, OpenGL would be the really obvious choice. In particular, you can have OpenCL deposit the output in an OpenGL texture, then you have OpenGL draw a quad using that texture. Alternatively, since you're just putting the output on screen anyway, you could just do the calculation in an OpenGL fragment shader (or, of course, a DirectX pixel shader), so you wouldn't put the output into memory off-screen just so you can copy the result onto the screen. If memory serves, the Orange book has a Mandelbrot shader as one of its examples.
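For what it's worth, a hedged sketch of that CL/GL path, assuming the OpenCL context was created with cl_khr_gl_sharing enabled and that context, queue, mandelbrot_kernel, global_size and the GL texture tex already exist (all of these names are mine, not from the question):
// Wrap the GL texture as a CL image (no copy through host memory).
cl_int err;
cl_mem cl_image = clCreateFromGLTexture(context, CL_MEM_WRITE_ONLY,
                                        GL_TEXTURE_2D, 0, tex, &err);

// GL must not touch the texture while CL owns it.
glFinish();
clEnqueueAcquireGLObjects(queue, 1, &cl_image, 0, NULL, NULL);

clSetKernelArg(mandelbrot_kernel, 0, sizeof(cl_mem), &cl_image);
clEnqueueNDRangeKernel(queue, mandelbrot_kernel, 2, NULL,
                       global_size, NULL, 0, NULL, NULL);

clEnqueueReleaseGLObjects(queue, 1, &cl_image, 0, NULL, NULL);
clFinish(queue);
// Now draw a textured quad with tex in OpenGL.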
Yes, sure, that's slow. You are making a round trip through the kernel and the video device driver for each individual pixel. You make it fast by drawing to memory first, then updating the screen in one fell swoop. That takes, say, CreateDIBitmap(), CreateCompatibleDC() and BitBlt().
This isn't a good time and place for an extensive tutorial on graphics programming. It is well covered by any introductory text on GDI and/or Windows API programming. Everything you'll need to know you can find in Petzold's seminal Programming Windows.
Since you already have an array of pixels, you can directly use BitBlt to transfer it to the window's DC. See this link for a partial example:
http://msdn.microsoft.com/en-us/library/aa928058.aspx
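For completeness, here is a sketch of my own (not the linked example) that pushes the whole COLORREF buffer to the DC in one call with StretchDIBits; width and height are assumed to match the buffer dimensions. Note that COLORREF stores red in the low byte while a 32 bpp DIB stores blue there, so red and blue may come out swapped unless you convert, which would also explain the blue/red swap mentioned in the edit above.
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;   // top-down: first row of buf is the top
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

StretchDIBits(hDC,
              0, 0, width, height,       // destination rectangle
              0, 0, width, height,       // source rectangle
              buf, &bmi, DIB_RGB_COLORS, SRCCOPY);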

Sharing a texture between direct3d and opengl?

I know mixing OpenGL and DirectX is not recommended, but I'm trying to build a bridge between two different applications that use separate graphics APIs, and I'm hoping there is a technique for sharing data, specifically textures.
I have a texture that is created in Direct3D like this:
d3_device->CreateTexture(width, height,
                         1, D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                         &texture, NULL);
Is there any way I can use this texture from OpenGL without taking a roundtrip through system memory?
YES. As previously posted (see below), there should exist at least one solution.
I found two possible solutions:
On NVIDIA cards a new extension was integrated in the 256 drivers; see http://developer.download.nvidia.com/opengl/specs/WGL_NV_DX_interop.txt
DXGI is the driving force compositing all windows in Vista and Windows 7; see msdn.microsoft.com/en-us/library/ee913554.aspx
I have not yet gained experience with either solution, but I hope I will find some time to test one of them. To me the first one seems to be the easier one.
[I think it should be possible. In recent Windows versions (Vista and 7) one can see a preview of any window's content in the taskbar (whether it's GDI, Direct3D, or OpenGL).
To my knowledge OpenGL previews were not supported in earlier Windows versions, so at least in the newer versions there should be a possibility to couple or share render contexts, even between different processes...
This is also true for other modern platforms, which share render contexts system-wide to enable different rendering effects.]
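A rough outline of the WGL_NV_DX_interop flow from the first option, assuming the wglDX* entry points have already been loaded via wglGetProcAddress and reusing d3_device and texture from the question's code (gl_tex is just a name I made up for the sketch):
// Tie the GL context to the D3D device once.
HANDLE interop_dev = wglDXOpenDeviceNV(d3_device);

// Register the D3D texture so that it backs a GL texture object.
GLuint gl_tex;
glGenTextures(1, &gl_tex);
HANDLE interop_tex = wglDXRegisterObjectNV(interop_dev, texture, gl_tex,
                                           GL_TEXTURE_2D,
                                           WGL_ACCESS_READ_ONLY_NV);

// Lock before using it from GL, unlock when done -- every frame.
wglDXLockObjectsNV(interop_dev, 1, &interop_tex);
glBindTexture(GL_TEXTURE_2D, gl_tex);
// ... draw with the texture ...
wglDXUnlockObjectsNV(interop_dev, 1, &interop_tex);

// Cleanup.
wglDXUnregisterObjectNV(interop_dev, interop_tex);
wglDXCloseDeviceNV(interop_dev);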
I think it is not possible, as the two APIs have different models of a texture.
You cannot access the texture memory directly without either DirectX or OpenGL.
The other way around: if it were possible, you would retrieve the texture address, its pitch, width and other (hardware-dependent) memory layout information, create a dummy texture in the other system and push the retrieved data into that newly created texture object. This is not possible:
obviously, it will not work on any decent hardware, and even if it did, it would not be very portable.
I don't think it's possible without downloading the data into host memory and re-uploading it into device memory.
It's possible now.
Use the ANGLE OpenGL API instead of native OpenGL.
You can share a Direct3D texture via the EGL_ANGLE_d3d_texture_client_buffer extension.
https://github.com/microsoft/angle/wiki/Interop-with-other-DirectX-code#demo
No.
Think of it like sharing an image between Photoshop and another image viewer: you would need a memory management library that both applications share.