I can use SetPixel (GDI) to set any pixel on the screen to a colour.
So how would I reproduce SetPixel at the lowest assembly level? What actually happens that triggers the instructions that say: OK, send a byte to position x in the framebuffer?
SetPixel most probably just calculates the address of the given pixel using a formula like:
pixel = frame_start + y * pitch + x * bytes_per_pixel
(where pitch is the length of one scanline in bytes), then it simply does *pixel = COLOR.
You can actually use CreateDIBSection to create your own buffer and associate it with a device context; then you can modify pixels at a low level using the formula above. This is useful if you have your own graphics library such as AGG.
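As a hedged illustration of that approach (draw_example, WIDTH and HEIGHT are made-up names for the sketch; it assumes a 32 bpp top-down DIB and an existing window DC):

    #include <windows.h>

    #define WIDTH  640
    #define HEIGHT 480

    /* Create a 32 bpp DIB whose pixels we can poke directly, set one pixel
       with plain pointer arithmetic, then blit the result to a window DC. */
    void draw_example(HDC hdcWindow)
    {
        BITMAPINFO bmi = {0};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = WIDTH;
        bmi.bmiHeader.biHeight      = -HEIGHT;   /* negative = top-down rows */
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        void *bits = NULL;
        HBITMAP dib = CreateDIBSection(hdcWindow, &bmi, DIB_RGB_COLORS,
                                       &bits, NULL, 0);

        /* The "SetPixel" part: the address formula from above in action. */
        DWORD *pixels = (DWORD *)bits;
        int x = 100, y = 50;
        pixels[y * WIDTH + x] = 0x00FF0000;      /* 0x00RRGGBB: red */

        /* Associate the DIB with a memory DC and copy it to the window. */
        HDC memDC = CreateCompatibleDC(hdcWindow);
        HGDIOBJ old = SelectObject(memDC, dib);
        BitBlt(hdcWindow, 0, 0, WIDTH, HEIGHT, memDC, 0, 0, SRCCOPY);

        SelectObject(memDC, old);
        DeleteDC(memDC);
        DeleteObject(dib);
    }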
When learning about GDI, I like to look into the WINE source code; there you can see how complicated it actually is (dibdrv_SetPixel):
http://fossies.org/dox/wine-1.6.1/gdi32_2dibdrv_2graphics_8c_source.html
It must also take clipping regions into account, as well as different pixel sizes and probably other features. It is also possible that some drivers accelerate this in hardware, but I have not heard of it.
If you want to recreate SetPixel you need to know how your graphics hardware works. Most hardware manufacturers follow at least the VESA standard, see here. This standard specifies that you can set the display mode using interrupt 0x10.
Once the display mode is set, the memory region displayed is defined by the standard, and you can simply write directly to display memory.
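As a concrete example, in the classic VGA mode 0x13 (320x200, 256 colours, set via interrupt 0x10) the framebuffer is a flat byte array at 0xA0000. A hedged sketch, assuming the mode is already set and the address is reachable (real mode, or an identity-mapped kernel; this will not work from a normal user-mode process):

    #include <stdint.h>

    #define VGA_FB    ((volatile uint8_t *)0xA0000)
    #define VGA_WIDTH 320

    /* One byte per pixel in mode 0x13, so the address math is trivial. */
    static void put_pixel(int x, int y, uint8_t color)
    {
        VGA_FB[y * VGA_WIDTH + x] = color;
    }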
Advanced graphics hardware deviates from the standard (it only covers the basics), so the above does not work for advanced features; for those you'll have to resort to the GPU documentation.
The "how" is always depends on "what", what I mean is that for different setups there are different methods, different systems different methods, what is common is that they are usually not allowing you to do it directly i.e. write to a memory address that will be displayed.
Some devices with a dedicated setup may allow it (some consoles do, as far as I know), but even there you will have to do some locking or other housekeeping to make it work as it should.
Since in modern PCs the graphics accelerator is fused into the video card (one counterexample is the Voodoo 1, which needed a separate video card to operate, since it was just a 3D accelerator), the GPU usually holds the framebuffer it draws from in its own memory, making it inaccessible from the outside.
So generally you say "here is a memory address, download the data into your own GPU memory and show it on screen", and this is where desktop composition comes in. Since video cards suffer from this transfer all the time, it is in fact faster to send only the data required to draw and let the GPU do the drawing. So Aero is just a visual layer; as far as I know the desktop compositor works regardless of Aero, making the drawing GPU-dependent.
So technically, low-level functions such as SetPixel have been software-only since Windows 7, for the reasons above: you simply can't access the memory directly. What I think it probably does is keep a bitmap for every HDC; when you call SetPixel you simply set a pixel in that bitmap, which is later sent to the GPU for display.
In the case of DOS or other old technology, it is probably just emulated in the same way as for GDI.
So in light of these:
So how would I reproduce SetPixel at the lowest assembly level?
It is probably just a copy to a memory location, but Windows composites the window surfaces into its framebuffer, to which you will never get direct access. One way to emulate what it does is to make a bitmap, get its memory pointer, set the pixel manually, and then tell Windows to show this bitmap on screen.
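A hedged sketch of that emulation (present and framebuffer are made-up names; assumes a 32 bpp buffer you fill yourself):

    #include <windows.h>

    /* "Set the pixel manually, then tell Windows to show the bitmap":
       copy the whole client-side buffer to the window in one call. */
    void present(HDC hdc, const DWORD *framebuffer, int W, int H)
    {
        BITMAPINFO bmi = {0};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = W;
        bmi.bmiHeader.biHeight      = -H;        /* top-down rows */
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        StretchDIBits(hdc, 0, 0, W, H, 0, 0, W, H,
                      framebuffer, &bmi, DIB_RGB_COLORS, SRCCOPY);
    }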
What actually happens that triggers the instructions that say: OK, send a byte to position x in the framebuffer?
Like I said before, it really depends on the environment what happens at the moment you make this call. The code that needs to be executed comes from different places; some of it is written by Microsoft, some by the GPU's manufacturer, and all of it together produces the pixel you see on your screen.
To set a pixel in the framebuffer in a video mode with 32-bit colour, we need the address of the pixel and its colour.
With the address and the colour we can simply use a move instruction to write the colour into the framebuffer.
Sample using the EDI register as a 32-bit address register (the default segment register is DS) to address the framebuffer with the move instruction.
x86, Intel syntax:
mov edi, Framebuffer ; load the address(upper left corner) into the EDI-Register
mov DWORD [edi], Color ; write the color to the address of DS:EDI
The first instruction loads the EDI register with the address of the framebuffer, and the second writes the colour to that address.
Hint for calculating the address of a pixel inside the framebuffer: some video modes use a scanline that is larger than the horizontal resolution (more bytes per line, with a part outside of the visible view), so the general formula is address = frame_start + y * scanline_bytes + x * bytes_per_pixel.
Dirk
Is it possible to display a black dot by changing values in the screen (video, i.e. monitor) memory map in RAM using a C program?
I don't want to use any library functions as my primary aim is to learn how to develop a simple OS.
I tried accessing the start of the screen memory map, i.e. 0xA0000, in C.
I tried to run the program but got a segmentation fault, since no direct access is provided. As superuser, the program executes, but nothing changes on screen.
Currently I am testing in VirtualBox.
A "real" operating system will not use the framebuffer at address 0xA0000, so you can't draw on the screen by writing to it directly. Instead your OS probably has proper video drivers that will talk to the hardware in various very involved ways.
In short there's no easy way to do what you want to do on a modern OS.
On the other hand, if you want to learn how to write your own OS, then it would be very good practice to try to write a minimal kernel that can output to the VGA text framebuffer at 0xB8000, and maybe then to the VGA graphics framebuffer at 0xA0000.
You can start using those framebuffers and drawing on the screen almost immediately after the BIOS jumps to your kernel, with a minimal amount of setup. You could do that directly from real mode in maybe a hundred lines of assembler, tops, or perhaps in C with a couple of lines of assembler glue first.
Even simpler would be to have GRUB set up the hardware and boot your minimal kernel; then you can write to the framebuffer directly in a couple of lines.
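To give a feel for how little is needed, here is a hedged sketch of printing to the VGA text framebuffer from such a minimal kernel (assumes legacy 80x25 text mode and that 0xB8000 is identity-mapped):

    #include <stdint.h>

    /* Each text cell is two bytes: the character, then an attribute byte
       (low nibble = foreground colour, high nibble = background). */
    #define TEXT_FB   ((volatile uint16_t *)0xB8000)
    #define TEXT_COLS 80

    static void put_char(int col, int row, char c, uint8_t attr)
    {
        TEXT_FB[row * TEXT_COLS + col] = (uint16_t)((attr << 8) | (uint8_t)c);
    }

    /* e.g. put_char(0, 0, 'A', 0x0F);  white 'A' on black, top-left corner */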
The short answer is no, because the framebuffer on modern operating systems is set up as determined by the VBIOS and the kernel driver(s). It depends on the amount of VRAM present on the board, the size of the GART, the physical RAM present, and a whole bunch of other things (VRAM reservation, whether it should be visible to the CPU or not, etc.). On top of this, modern OSes use multiple back buffers and flip the hardware to display between these buffers, so even if you could directly poke the framebuffer, the address would change from frame to frame.
If you are interested in doing this for learning purposes, I would recommend creating a simple OpenGL or D3D (for example) "present" function which takes a "fake" system-allocated framebuffer and presents it to the screen using regular hardware operations.
You could even set the refresh up on a timer to fake updates.
Then your fake OS would just write pixels to the fake system-memory buffer, and this rendering function would take care of displaying it as if it were real.
I'm currently writing a 3D renderer (for fun and research), so I need a way to draw my framebuffer to a window. Since I'm doing all of my calculations on CPU, the drawing needs to be as fast as possible.
One of my goals is to use no existing graphics library (OpenGL/DirectX) so the drawing to the screen is pure Win32. In my research I've found a couple of ways to create and draw bitmaps and now I'm looking for the best one.
My current implementation uses a bitmap created with CreateDIBSection(), which is drawn to my window DC using BitBlt().
CreateDIBSection() gives me a pointer to the bitmap bytes, so I can manipulate it without copying. Using this method I achieve an update rate of about 260 FPS (without any rendering done).
This seems a bit slow, so I'm looking for optimizations.
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
How can I make sure my DIB bitmap and window are compatible?
Are there methods of drawing a bitmap which are faster than my current implementation?
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
Very few systems run in a palette mode any more, so it seems unlikely this is an issue for you.
Aside from palettes, some GDI functions also cause a color matching conversion to be applied if the source bitmap and the destination have different gamuts. BitBlt, however, does not do this type of color matching, so you're not paying a price for that.
How can I make sure my DIB bitmap and window are compatible?
You don't. You can use DIBs (which are Device-Independent Bitmaps) or compatible (device-dependent) bitmaps. It's possible that your DIB happens to match the current mode of your device: for example, if you're using a 32 bpp DIB and your display is in that same mode, then no conversion is necessary. If you want a bitmap that's guaranteed to be in the same mode as your device, then you can't use a DIB and all the nice properties it provides for predictable pixel layout and format.
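One thing you can do (a minimal sketch, using standard GDI calls) is query the display's pixel depth and then create your DIB in a matching format, so BitBlt has no conversion to do:

    #include <windows.h>

    /* Sketch: ask GDI how deep the display is, so a DIB can be created in
       the same format and blitted without per-pixel conversion. */
    int display_bits_per_pixel(void)
    {
        HDC screen = GetDC(NULL);
        int bpp = GetDeviceCaps(screen, BITSPIXEL) * GetDeviceCaps(screen, PLANES);
        ReleaseDC(NULL, screen);
        return bpp;   /* typically 32 on modern systems */
    }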
Are there methods of drawing a bitmap which are faster than my current implementation?
The limitation is most likely in getting the data from system memory to graphics adapter memory. To get around that limitation, you need a faster graphics bus, or you need to render directly into graphic memory, which means you'd need to do your computation on the GPU rather than the CPU.
If you're rendering a 1920 x 1080 pixel image at 24 bits per pixel, that's close to 6 MB for your frame buffer. That's an awful lot of data. If you're doing that 260 times per second, that's actually pretty impressive.
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
It's conceivable, but the only way to know would be to measure it. And the results might vary from machine to machine because of differences in the graphics adapter (and which bus they use).
Well, I want to know... maybe others too.
Is it possible to control each pixel separately on a screen by programming, especially C or C++?
Do you need special control over the drivers for the current screen? Are there operating systems which allow you to change pixels (for example, to draw a message/overlay on top of everything)?
Or does Windows support this, maybe in its WinAPI?
Edit:
I am asking this question because I want my computer to warn me when I'm gaming and my processor gets too hot. I mainly use Windows, but I also have a dual-boot Ubuntu distro.
The lower you go, the more hassle you'll run into.
If you want raw pixel manipulation you might check out http://www.libsdl.org/ which helps you mitigate the hassle of creating surfaces/windows and that kind of stuff.
Linux has a few ways to get you even lower if you want (i.e. without "windows" or "xwindows" or anything of the sort, just the raw screen); look into the Linux framebuffer if you're interested in that.
Delving even lower (such as doing things in your own OS), the BIOS will let you switch into certain video modes; this is what OS installers tend to use (at least they used to; some of the fancier ones don't any more). It isn't the fastest way of doing things, but it can get you into the realm of showing pixels in a few assembly instructions.
And of course if you wanted to do your own OS and take advantage of the video card (bypass the BIOS), you're then talking about writing video drivers and such, which is obviously a substantial amount of work :)
Regarding overlay messages on top of the screen and that sort of thing: Windows does support that, so I'm sure you can do it with the WinAPI, although there are likely libraries that would make it easier. I do know you don't need to delve too deep to do that sort of thing, though.
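As a hedged sketch of the WinAPI route (window-class registration and the message loop are omitted, and "OverlayClass" is a made-up class name): a topmost layered window gives you a semi-transparent, click-through overlay without touching any driver:

    #include <windows.h>

    /* Sketch: an always-on-top, click-through, semi-transparent window,
       suitable for an on-screen warning. Assumes "OverlayClass" was
       registered with RegisterClass elsewhere. */
    HWND create_overlay(HINSTANCE hInst)
    {
        HWND hwnd = CreateWindowExA(
            WS_EX_LAYERED | WS_EX_TOPMOST | WS_EX_TRANSPARENT,
            "OverlayClass", "CPU temperature warning",
            WS_POPUP, 100, 100, 400, 100,
            NULL, NULL, hInst, NULL);

        SetLayeredWindowAttributes(hwnd, 0, 200, LWA_ALPHA); /* ~78% opaque */
        ShowWindow(hwnd, SW_SHOWNOACTIVATE);
        return hwnd;
    }

Note that games running in full-screen exclusive mode may still draw over or bypass such a window.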
Let's look at it one bit at a time:
Is it possible to control each pixel separately on a screen by programming, especially C or C++?
Possibly. It really depends on the graphics architecture, and in many modern systems, the actual screen surface (that is "the bunch of pixels appearing on the screen") is not directly under software control - at least not from "usermode" (that is, from an application that you or I can write - you need to write driver code, and you need to co-operate sufficiently with the existing graphics driver).
It is generally accepted that drawing the data into an off-screen buffer and using a BitBlt [BitBlockTransfer] function to copy the content onto the screen is the preferred way to do this sort of thing.
So, in reality, you probably can't manipulate each pixel ON the screen, but you may be able to appear as if you do.
Do you need special control over the drivers for the current screen?
Assuming you could get direct access to the screen memory, your code would certainly have to cooperate with the driver; otherwise, who's to say that what you want to appear on the screen doesn't get overwritten by something else (e.g. you want full-screen access, and the clock updater draws the time on top of what you drew once a minute, etc.).
You may be able to set the driver into a mode where you have a "hole" that allows you to access the screen memory as a big "framebuffer". I don't think there's an easy way to do this in Windows; I don't remember one from back in 2003-2005 when I wrote graphics drivers for a living.
Are there operating systems which allow you to change pixels (for example draw a message/overlay on top of everything)?
It is absolutely possible to create an overlay layer in the hardware of modern graphics cards. That's generally how video playback works - the video is played into a piece of framebuffer memory that is overlaid on top of the other graphics. You need help from the driver, and this is definitely available in the Windows API, via DirectX as far as I remember.
Or does windows support this maybe in it's WinApi?
Probably, but to answer precisely, we need to understand better what you are looking to do.
Edit: In your particular use-case, I would have thought that playing a sound or ejecting the CD/DVD drive might be a more suitable option. It can be hard to overlay something on top of the graphics drawn by a game, because games often try to use as much of the available graphics resources as possible, and you will probably have a hard time finding a way that works even for the simplest use-cases, never mind something that works across multiple categories of games using different drawing/engine/graphics libraries. I'm also not entirely sure it's anything to worry overly about: modern CPUs are pretty tolerant of overheating, so the CPU will just slow down, possibly grind to a halt, but it will not break, even if you take the heatsink off (no, I don't suggest you try this!).
Every platform supports an efficient raw pixel block transfer, a.k.a. BitBlt(), so if you really want to go down to framebuffer level you can allocate a bitmap, use pointers to set its contents directly, and then flip this memory chunk into the video RAM buffer with one line of code. Of course it is not as efficient as working with PCI framebuffers directly, but on the other hand this approach (BitBlt) was fast enough even in Win95 days to port Wolfenstein 3D to the Pentium WITHOUT the use of WinG.
However, care must be taken when creating this bitmap to match its format (i.e. RGB 16 bits, 32 bits, etc.) with the actual mode the device is in; otherwise the graphics subsystem will do a lengthy recoding/dithering pass which will completely kill your speed.
So it depends on your goals. If you want a 3D game, your performance will suck with this approach. If you just want to render some shapes and don't need more than 10-15 FPS, this will work without diving into any device-driver levels.
Here are a few tips for overlaying in Windows:
hdc = GetDC(0); // returns an HDC for the whole screen and is VERY fast
You can take the HDC for the screen and do a BitBlt(hdc, ..., SRCCOPY) to flip blocks of raster efficiently. There are also predefined Windows handles for the desktop; I don't recall the exact mechanics, but if you are on multiple monitors you can get an HDC for each desktop. Look at GetDesktopWindow, GetDC and the like.
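Putting that together, a minimal sketch (memDC is assumed to be a memory DC you prepared elsewhere with a selected bitmap); note that the compositor or any window repaint can overwrite the result at any moment:

    #include <windows.h>

    /* Sketch: copy a block from a memory DC straight onto the screen DC. */
    void blit_to_screen(HDC memDC, int x, int y, int w, int h)
    {
        HDC screen = GetDC(0);   /* DC for the whole screen */
        BitBlt(screen, x, y, w, h, memDC, 0, 0, SRCCOPY);
        ReleaseDC(0, screen);
    }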
I was wondering whether it is faster to render a single quad the size of the window, with a texture the size of the window, than to draw the bitmap directly to the window using double buffering coupled with the platform-specific way of drawing to a window.
The initial setup for textures tends to be relatively slow, but once that's done the drawing is quite fast: in the typical case where graphics memory is available, the texture is uploaded to the memory on the graphics card during initial setup, and after that all the drawing happens from there. At the same time, that initial upload will also typically include a full mipmap chain down to 1x1 resolution, so you're uploading a bit more than just the full-resolution texture.
With platform specific drawing, you usually don't have quite as much work up-front. If only part of the bitmap is visible, only the visible part will be uploaded. If the bitmap is going to be scaled, it'll typically scale it on the CPU and send it to the card at the current scale (and never upload anything resembling a mipmap). OTOH, virtually every time something needs to be redrawn, it'll end up re-sending the bitmap data for the newly exposed area. It doesn't take much of that to lose the (often minor anyway) advantage of minimizing what was sent to start with.
Using textures is usually a lot faster, since most native drawing APIs aren't hardware accelerated.
It will very probably depend on the graphics card and driver.
As a relative newcomer to MFC, I see Device Contexts (DCs) a lot. I vaguely understand that they have something to do with drawing, but the specifics are not well explained anywhere that I can find. What does creating a "compatible device context" mean, and why is it important? What does SelectObject do, and how must I make a DC compatible first?
A Device Context is just a place that drawing occurs, so if you have two different DC's, you're drawing in two different places. Kind of like a file handle.
Device contexts can refer to real estate on screen or to bitmaps that just reside in memory, and probably other places too; those are just the two I can think of at the moment.
Compatible contexts are ones that have the same underlying pixel organization, by which is meant number of bits per pixel, bytes per pixel, color organization and so forth. Memory bitmap device contexts can have any organization you want, but your screen contexts are going to be related (eventually) to buffers on your graphics card, which will (depending on mode, etc) have a very specific pixel organization.
Having compatible contexts means it's efficient to transfer image data between them, because little or no translation of the data is required. At the other extreme, if you have an 8-bit, 256-color-palette bitmap and you try to blit it to a screen that has 8 bits each of RGBA per pixel, every last pixel will require significant massaging as it is copied, so copying incompatible bitmaps is very much slower. According to the Win32 SDK documentation, at least BitBlt() and StretchBlt() "convert the source color format to match the destination format", so it can be done.
Investigate CreateCompatibleDC() and CreateCompatibleBitmap() as starting points for how to create drawing contexts that are compatible with already existing ones.
SelectObject() controls which resources are currently active within the device context. A context has a current pen, brush, font, and bitmap. These make a lot of the other GDI calls simpler by allowing you to specify fewer parameters. For instance, you don't have to specify the font when you use TextOut(), but if you want to change the font, that's where SelectObject() comes in. If you feed SelectObject() a handle to a font, the return value is a handle to the font that was previously in effect, and subsequent operations use the new font. Behavior is the same for the other kinds of resources: pens, brushes, etc.
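A minimal sketch of that pattern (hFont is assumed to be a font you created elsewhere, e.g. with CreateFont()):

    #include <windows.h>
    #include <string.h>

    /* Sketch of the usual SelectObject dance: swap a resource in, draw,
       then restore the previous one so the DC is left as you found it. */
    void draw_label(HDC hdc, HFONT hFont, const char *text)
    {
        HFONT oldFont = (HFONT)SelectObject(hdc, hFont);
        TextOutA(hdc, 10, 10, text, (int)strlen(text));
        SelectObject(hdc, oldFont);   /* put the original font back */
    }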
(Old question but this is shown when googling...)
I'm afraid that, for beginners, the selected answer can be a bit misleading.
Please keep in mind that MFC wraps the Win32 API, so we need to see from the Win32 level, to better understand what's going on.
To understand why Device Contexts exist, we should understand GDI (the Graphics Device Interface).
Then, why does GDI exist? For device independence. To achieve this, Microsoft made Graphic Objects (Brush, Pen, ...), each of which wraps and abstracts away the device-dependency issues.
Now we don't have to care about different devices, and that's the whole point of GDI.
So we need to hold Graphic Objects(Brush, Pen, Bitmap,...) in some data structure, and that's the Device Context.
Then what is a SelectObject function?
Literally, it enables the DC to "select" a Graphic Object. That is, we use SelectObject to change a graphic-object handle stored in the DC to another graphic-object handle that we want to use.
Then what is a compatible device context?
A compatible device context (= memory device context) uses memory rather than a device.
From MSDN (emphasis mine):
To enable applications to place output in memory rather than sending it to an actual device, use a special device context for bitmap operations called a memory device context. A memory DC enables the system to treat a portion of memory as a virtual device. It is an array of bits in memory that an application can use temporarily to store the color data for bitmaps created on a normal drawing surface. Because the bitmap is compatible with the device, a memory DC is also sometimes referred to as a compatible device context.
The memory DC stores bitmap images for a particular device. An application can create a memory DC by calling the CreateCompatibleDC function.
A compatible DC can be used, for example, to reduce flickering: we can compose the bitmap in memory and show it once, instead of showing every intermediate change as the image is drawn.
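For example, a minimal sketch of that flicker-reduction pattern, as it might appear in a WM_PAINT handler (width and height are assumed to be the client-area size):

    #include <windows.h>

    /* Sketch: draw everything into a compatible (memory) DC first, then
       blit the finished frame to the window in one step, so the user
       never sees partially drawn output. */
    void paint_double_buffered(HDC hdc, int width, int height)
    {
        HDC memDC = CreateCompatibleDC(hdc);
        HBITMAP bmp = CreateCompatibleBitmap(hdc, width, height);
        HGDIOBJ old = SelectObject(memDC, bmp);

        /* ...all drawing goes to memDC here, e.g.: */
        PatBlt(memDC, 0, 0, width, height, WHITENESS);
        Rectangle(memDC, 20, 20, 120, 80);

        /* One fast, compatible copy to the screen. */
        BitBlt(hdc, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

        SelectObject(memDC, old);
        DeleteObject(bmp);
        DeleteDC(memDC);
    }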
The following MSDN docs would be helpful to newbies (including me):
Device Contexts, from the MFC viewpoint.
Device Contexts, from the Win32 viewpoint, and this following section.