Started remote-debugging a C++ project today on a Windows 7 machine running in VMware, and was astonished to see the following pattern at a random memory location:
Who might write this (it's not me!), and for what reason? Just curious whether anyone has seen something like this.
It looks like a rendered mask for a font: each character of a font (typeface + size + style) is rendered once in memory, then blitted to the output surface. The 8-bpp depth suggests you have font anti-aliasing enabled.
I'm assuming your project involves a GUI; you might be looking at a shared-memory area that GDI uses for storing rasterized fonts.
If not, then this might just be leftover memory from a previous process or OS component that wasn't zeroed before being used by your application.
It's hard to say. Possibly memory used to buffer some fonts (in this case, zeros), or even buffered printer or screen content.
While this seems like a basic question, I'm opening this thread after extensive Stack Overflow and Google searching, which helped, but not in a definitive way.
My C++ code draws into an array used as an RGB framebuffer. Hardware acceleration is not possible for this original graphics algorithm, and I want my software to generate the images directly (no hardware acceleration), also for portability reasons.
However, I would like hardware acceleration for copying my C++ array into the window.
In the past I have written the images generated by my code to BMP files; now I want to display them as efficiently as possible, in real time, in a window on the screen.
I wrote a Win32 program that has a window freely resizeable by the user, so far so good.
What is the best / fastest / most efficient way to blit/copy my system-memory RGB array into a window that can change size at any moment, possibly v-synced? (I can adapt my graphics generator to any RGB format, so as to avoid conversions.) I am handling resizing via WM messages, and in any case I'm not asking how to resize/rescale the image array; I will do that all by myself. I mention the resizable window only to say that its size cannot be fixed after creation: when the user changes the window's size, I will simply reallocate the RGB array and generate a new image.
NOTE: the image array should be allocated by my own program, but if a system-provided pointer would make things much more efficient (e.g. pointing directly into video RAM), then that's fine.
Windows 10 is the target OS, but backward compatibility at least down to Windows 7 would be better. The hardware is PCs, even low-end ones, released in the last 5-10 years, so I guess all of them will have hardware acceleration for GDI or something like Direct2D (or whatever DirectDraw is called now).
NOTE: please do NOT propose the use of any library; I will deal directly with GDI calls, Direct2D, or whatever is most efficient for the task, but without third-party libraries.
Thank you very much. I don't know much about Windows GUI coding: I've done console mode only until now, plus some windows (so I am familiar with WM messages) and basic device-context graphics output. From my research (I wanted to ask here because I'm really in doubt, and lots of the posts I've read date back to 2010), SetDIBitsToDevice should be the best solution to my problem, but if so, I think I would still need some way to synchronize with vsync to avoid, if possible, the annoying tearing/flickering.
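A minimal sketch of the SetDIBitsToDevice route mentioned in the question, assuming a fixed-size 32-bpp buffer. The names WIDTH, HEIGHT, set_pixel and present are illustrative, not WinAPI; only SetDIBitsToDevice and the BITMAPINFO fields are real GDI. The pixel-layout helper is portable, while the present path is guarded so the sketch compiles anywhere:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#ifdef _WIN32
#include <windows.h>
#endif

enum { WIDTH = 640, HEIGHT = 480 };

/* Write one BGRX pixel into a top-down 32-bpp buffer.
 * GDI's 32-bit DIB format stores blue in the lowest byte. */
void set_pixel(uint8_t *buf, int x, int y,
               uint8_t r, uint8_t g, uint8_t b)
{
    uint8_t *p = buf + ((size_t)y * WIDTH + (size_t)x) * 4;
    p[0] = b;
    p[1] = g;
    p[2] = r;
    p[3] = 0; /* padding byte, ignored by GDI */
}

#ifdef _WIN32
/* Push the whole buffer to a window's client area (e.g. in WM_PAINT). */
void present(HDC hdc, const uint8_t *buf)
{
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = WIDTH;
    bmi.bmiHeader.biHeight      = -HEIGHT; /* negative height = top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;
    SetDIBitsToDevice(hdc, 0, 0, WIDTH, HEIGHT,
                      0, 0, 0, HEIGHT, buf, &bmi, DIB_RGB_COLORS);
}
#endif
```

Matching the buffer format to the display mode avoids a conversion pass inside GDI. GDI itself offers no vsync; for tear-free presentation you would have to look at DWM (e.g. DwmFlush) or move the presentation to DXGI/Direct2D.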
I have an OpenGL test application that is producing incredibly unusual results. When I start up the application it may or may not feature a severe graphical bug.
It might produce an image like this:
http://i.imgur.com/JwPoDrh.jpg
Or like this:
http://i.imgur.com/QEYwhBY.jpg
Or just the correct image, like this:
http://i.imgur.com/zUJbwCM.jpg
The scene consists of one spinning colored cube (made of 12 triangles) with a simple shader on it that colors the pixels based on the absolute value of their model space coordinates. The junk faces appear to spin with the cube as though they were attached to it and often junk triangles or quads flash on the screen briefly as though they were rendered in 2D.
The thing I find most unusual is that the behavior is highly inconsistent: starting the exact same application repeatedly, without my changing anything else on the system, produces different results, sometimes bugged, sometimes not. The arrangement of the junk faces isn't consistent either.
I can't really post source code for the application as it is very lengthy and the actual OpenGL calls are spread out across many wrapper classes and such.
This is occurring under the following conditions:
Windows 10 64 bit OS (although I have observed very similar behavior under Windows 8.1 64 bit).
AMD FX-9590 CPU (Clocked at 4.7GHz on an ASUS Sabertooth 990FX).
AMD Radeon HD 7970 GPU (it is a couple of years old, and occasionally areas of the screen in 3D applications become scrambled, but nothing on the scale of what I'm experiencing here).
Using SDL (https://www.libsdl.org/) for window and context creation.
Using GLEW (http://glew.sourceforge.net/) for OpenGL.
Using OpenGL versions 1.0, 3.3 and 4.3 (I'm assuming SDL is indeed using the versions I instructed it to).
AMD Catalyst driver version 15.7.1 (Driver Packaging Version listed as 15.20.1062.1004-150803a1-187674C, although again I have seen very similar behavior on much older drivers).
Catalyst Control Center lists my OpenGL version as 6.14.10.13399.
This looks like a broken graphics card to me. Most likely some problem with the memory (either the memory itself or a soldering problem). Artifacts like those you see can happen if, for some reason, the address for a memory operation does not fully settle, or is not set at all, before the read starts; that can happen due to a bad connection between the GPU and the memory (failed solder joints) or because the memory itself has failed.
Solution: buy a new graphics card. You could try resoldering it with a reflow process; there are tutorials on how to do this DIY, but a proper reflow oven gives better results.
Is it possible to display a black dot by changing values in the screen (video, i.e. monitor) memory map in RAM, using a C program?
I don't want to use any library functions as my primary aim is to learn how to develop a simple OS.
I tried accessing the start of the screen memory map, i.e. 0xA0000 (in C).
I tried to run the program but got a segmentation fault, since no direct access is allowed. Running as superuser, the program executes, but nothing changes on screen.
Currently I am testing in VirtualBox.
A "real" operating system will not use the framebuffer at address 0xA0000, so you can't draw on the screen by writing to it directly. Instead your OS probably has proper video drivers that will talk to the hardware in various very involved ways.
In short there's no easy way to do what you want to do on a modern OS.
On the other hand, if you want to learn how to write your own OS, then it would be very good practice to try to write a minimal kernel that can output to the VGA text framebuffer at 0xB8000 and maybe then the VGA graphic framebuffer at 0xA0000.
You can start using those framebuffers and drawing on the screen almost immediately after the BIOS jumps to your kernel, with a minimal amount of setup. You could do that directly from real mode in maybe a hundred lines of assembler, tops, or in C with a couple of lines of assembler glue first.
Even simpler would be to have GRUB set up the hardware and boot your minimal kernel; then you can write to the framebuffer directly in a couple of lines.
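To make the 0xB8000 idea concrete, here is a sketch of writing to the VGA text framebuffer. In a freestanding kernel you would pass (uint16_t *)0xB8000 (and the pointer should really be volatile there); in this sketch `vga` is an ordinary pointer so the addressing logic can run anywhere. The function name vga_puts is illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define VGA_COLS 80
#define VGA_ROWS 25

/* Write a string at (row, col). Each VGA text cell is a 16-bit word:
 * low byte = ASCII character, high byte = attribute byte
 * (low nibble foreground colour, high nibble background colour). */
void vga_puts(uint16_t *vga, int row, int col, const char *s, uint8_t attr)
{
    uint16_t *cell = vga + row * VGA_COLS + col;
    while (*s)
        *cell++ = (uint16_t)((attr << 8) | (uint8_t)*s++);
}
```

In a real kernel the call would be something like vga_puts((uint16_t *)0xB8000, 0, 0, "Hello", 0x0F), where 0x0F means white on black.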
The short answer is no, because on modern operating systems the framebuffer is set up by the VBIOS and the kernel driver(s). Its location depends on the amount of VRAM present on the board, the size of the GART, the physical RAM present, and a whole bunch of other things (VRAM reservation, whether it should be CPU-visible or not, etc.). On top of this, modern OSes use multiple back buffers and flip the hardware between them for display, so even if you could poke the framebuffer directly, the address would change from frame to frame.
If you are interested in doing this for learning purposes, I would recommend creating a simple OpenGL or D3D (for example) 'present' function that takes a 'fake', system-allocated framebuffer and presents it to the screen using regular hardware operations.
You could even drive the refresh with a timer to fake the update.
Then your fake OS would just write pixels to the fake system-memory buffer, and this rendering function would take care of displaying it as if it were real.
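A sketch of that fake-framebuffer structure, under the assumptions above. All names (FakeFB, fb_create, fb_put_pixel, fb_present) are invented for illustration; the present step is a stub where a real program would upload the pixels to the GPU:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* A fake, system-allocated framebuffer: 0xAARRGGBB pixels, row-major. */
typedef struct {
    int width, height;
    uint32_t *pixels;
} FakeFB;

FakeFB *fb_create(int w, int h)
{
    FakeFB *fb = malloc(sizeof *fb);
    fb->width  = w;
    fb->height = h;
    fb->pixels = calloc((size_t)w * h, sizeof *fb->pixels);
    return fb;
}

/* The "OS" draws by poking pixels; out-of-bounds writes are ignored. */
void fb_put_pixel(FakeFB *fb, int x, int y, uint32_t argb)
{
    if (x >= 0 && x < fb->width && y >= 0 && y < fb->height)
        fb->pixels[(size_t)y * fb->width + x] = argb;
}

/* Called from a timer: a real version would hand fb->pixels to the GPU
 * here, e.g. via glTexSubImage2D() or a D3D staging texture. */
void fb_present(const FakeFB *fb)
{
    (void)fb; /* display is simulated in this sketch */
}
```

The point of the split is that the "OS" code never learns where the real framebuffer lives; only fb_present touches the display path, which mirrors how a real driver stack hides the hardware address.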
I've done a bit of research but haven't found anything useful so far.
In short, I would like to create a bitmap/canvas in memory, use an API that has functions for drawing primitive shapes and text on that bitmap, and read the memory directly. This should be done entirely in memory, with no need for a window system or something like Qt or GTK.
Why? I'm writing for the Raspberry Pi and am interfacing with a 256x64, 4-bit greyscale OLED display over SPI. This is all working fine so far. I've written a couple of functions for drawing text and so on, but am wondering if there is already a library I can use. I double-buffer the display, so I just need to manipulate the image in memory and then push the entire picture out in one transfer.
I can easily do this on Windows, but I'm not sure of the best way to do it on Linux.
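For the in-memory buffer itself, the packed 4-bit addressing is only a few lines. A sketch for the 256x64, 4-bit greyscale case described above, assuming two pixels per byte with the even pixel in the high nibble (the actual nibble order depends on the OLED controller; check its datasheet and swap the branches if needed). The name oled_set_pixel is illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define OLED_W 256
#define OLED_H 64
#define OLED_BYTES (OLED_W * OLED_H / 2) /* two 4-bit pixels per byte */

/* Set one 4-bit grey pixel in the off-screen buffer that later gets
 * pushed over SPI in a single transfer. Even x = high nibble. */
void oled_set_pixel(uint8_t *buf, int x, int y, uint8_t grey4)
{
    uint8_t *p = &buf[(y * OLED_W + x) / 2];
    if ((x & 1) == 0)
        *p = (uint8_t)((*p & 0x0F) | ((grey4 & 0x0F) << 4));
    else
        *p = (uint8_t)((*p & 0xF0) | (grey4 & 0x0F));
}
```

A library would layer lines, rectangles and fonts on top of exactly this kind of primitive, so if none fits, building up from here is not much work.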
I've used Image Magick to do similar things. It supports Bitmap and SVG. http://www.imagemagick.org/script/index.php
Well, I want to know... maybe others do too.
Is it possible to control each pixel separately on a screen by programming, especially C or C++?
Do you need special control over the drivers for the current screen? Are there operating systems that allow you to change pixels (for example, to draw a message/overlay on top of everything)?
Or does Windows maybe support this in its WinAPI?
Edit:
I am asking this question because I want to make my computer warn me when I'm gaming and my processor gets too hot. I mainly use Windows but I have a dual boot ubuntu distro.
The lower you go, the more hassle you'll run into.
If you want raw pixel manipulation you might check out http://www.libsdl.org/ which helps you mitigate the hassle of creating surfaces/windows and that kind of stuff.
Linux has a few ways to get you even lower if you want (i.e. without "windows" or X windows or anything of the sort, just the raw screen); look into the Linux framebuffer if you're interested in that.
Delving even lower still (such as doing things in your own OS), the BIOS will let you switch into certain video modes; this is what OS installers tend to use (at least they used to; some of the fancier ones don't anymore). It isn't the fastest way of doing things, but it can get you to the point of showing pixels in a few assembly instructions.
And of course if you wanted to do your own OS and take advantage of the video card (bypass the BIOS), you're then talking about writing video drivers and such, which is obviously a substantial amount of work :)
Regarding overlaying messages on top of the screen and that sort of thing: Windows does support it, so I'm sure you can do it with the WinAPI, although there are likely libraries that would make it easier. I do know you don't need to delve too deep to do that sort of thing, though.
Let's look at it one bit at a time:
Is it possible to control each pixel separately on a screen by programming, especially C or C++?
Possibly. It really depends on the graphics architecture, and in many modern systems, the actual screen surface (that is "the bunch of pixels appearing on the screen") is not directly under software control - at least not from "usermode" (that is, from an application that you or I can write - you need to write driver code, and you need to co-operate sufficiently with the existing graphics driver).
It is generally accepted that drawing the data into an off-screen buffer and using a BitBlt [bit-block transfer] function to copy the content onto the screen is the preferred way to do this sort of thing.
So, in reality, you probably can't manipulate each pixel ON the screen - but you may be able to appear like you do.
Do you need special control over the drivers for the current screen?
Assuming you could get direct access to the screen memory, your code would certainly have to cooperate with the driver; otherwise, who's to say that what you want to appear on the screen doesn't get overwritten by something else? (E.g. you want full-screen access, but the clock updater draws the time on top of what you drew once a minute, etc.)
You may be able to put the driver into a mode where you have a "hole" that lets you access the screen memory as one big framebuffer. I don't think there's an easy way to do this in Windows; at least I don't remember one from 2003-2005, when I wrote graphics drivers for a living.
Are there operating systems which allow you to change pixels (for example draw a message/overlay on top of everything)?
It is absolutely possible to create an overlay layer in the hardware of modern graphics cards. That's generally how video playback works - the video is played into a piece of framebuffer memory that is overlaid on top of the other graphics. You need help from the driver, and this is definitely available in the Windows API, via DirectX as far as I remember.
Or does Windows maybe support this in its WinAPI?
Probably, but to answer precisely, we need to understand better what you are looking to do.
Edit: in your particular use-case, I would have thought that playing a sound or ejecting the CD/DVD tray might be a more suitable option. It can be hard to overlay something on top of the graphics drawn by a game, because games often try to use as much of the available graphics resources as possible, and you will probably have a hard time finding a way that works even for the simplest cases, never mind something that works across multiple categories of games using different drawing/engine/graphics libraries. I'm also not entirely sure it's anything to worry about: modern CPUs are pretty tolerant of overheating, so the CPU will just slow down, possibly grind to a halt, but it will not break, even if you take the heatsink off (no, I don't suggest you try this!).
Every platform supports efficient raw pixel block transfers (aka BitBlt()), so if you really want to go down to framebuffer level, you can allocate a bitmap, set its contents directly through a pointer, and then, with one line of code, efficiently flip this memory chunk into the video-RAM buffer. Of course it is not as efficient as working with PCI framebuffers directly, but on the other hand this approach (BitBlt) was fast enough even in the Win95 days to port Wolfenstein 3D to a Pentium CPU WITHOUT the use of WinG.
HOWEVER, care must be taken when creating this bitmap to match its format (i.e. RGB 16-bit, 32-bit, etc.) with the actual mode the device is in; otherwise the graphics subsystem will do a lengthy recoding/dithering pass that will completely kill your speed.
So it depends on your goals. If you want a 3D game, your performance will suck with this approach. If you just want to render some shapes and don't need more than 10-15 fps, it will work without diving into any device-driver levels.
Here are a few tips for overlaying in Windows:
hdc = GetDC(0); // returns an HDC for the whole screen and is VERY fast
You can take the HDC for the screen and do a BitBlt(hdc, ..., SRCCOPY) to flip blocks of raster data efficiently. There are also predefined Windows handles for the desktop, but I don't recall the exact mechanics; if you are on multiple monitors you can get an HDC for each desktop. Look at GetDesktopWindow, GetDC, and the like.
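A sketch of that tip: grab the screen DC with GetDC(0) and BitBlt a small memory bitmap on top of everything. The names fill_banner and show_banner, and the size, position and colour, are all illustrative; only the GDI calls are real API. The buffer-filling helper is portable, and the Windows present path is guarded so the sketch compiles anywhere. Note the caveat from the earlier answer still applies: a game rendering full-screen via D3D/OpenGL may simply draw over this again on its next frame.

```c
#include <assert.h>
#include <stdint.h>
#ifdef _WIN32
#include <windows.h>
#endif

/* Fill a w*h 32-bpp pixel buffer with one solid colour: the "banner". */
void fill_banner(uint32_t *px, int w, int h, uint32_t colour)
{
    for (int i = 0; i < w * h; i++)
        px[i] = colour;
}

#ifdef _WIN32
/* Blit a solid red banner to the top-left corner of the screen. */
void show_banner(int w, int h)
{
    HDC screen = GetDC(NULL);         /* DC for the entire screen */
    HDC mem    = CreateCompatibleDC(screen);

    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = w;
    bmi.bmiHeader.biHeight      = -h; /* negative = top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void *bits = NULL;
    HBITMAP bmp = CreateDIBSection(screen, &bmi, DIB_RGB_COLORS,
                                   &bits, NULL, 0);
    HGDIOBJ old = SelectObject(mem, bmp);

    fill_banner((uint32_t *)bits, w, h, 0x00FF0000); /* red, 0x00RRGGBB */
    BitBlt(screen, 0, 0, w, h, mem, 0, 0, SRCCOPY);

    SelectObject(mem, old);
    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);
}
#endif
```

Using CreateDIBSection rather than a plain CreateCompatibleBitmap is what gives you the raw pointer (`bits`) into the bitmap, so the pixel format you fill matches what BitBlt transfers.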