I've seen that using interrupts it is only possible to draw in low resolutions. Let's say I am making a simple OS that should display at any resolution, like 4K or 1920x1080. I have an Intel x64 processor with Intel HD graphics and an Nvidia card, and I am on a laptop.
On the IBM PC architecture, video memory (the framebuffer) is mapped into the CPU's address space, so you can draw by writing directly to video memory. Here is a nice doc on that. But beware of screen flickering, which happens when you write to video memory at the moment the video adapter is redrawing the screen. To avoid this you can use the double buffering technique, which is described here.
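A minimal sketch of the double-buffering idea in C, assuming a linear 32-bit frame buffer whose address and pitch were obtained from the firmware (the address and dimensions below are made-up example values):

    #include <stdint.h>
    #include <string.h>

    /* Example values; in a real kernel these come from the firmware/boot loader. */
    #define WIDTH  1024
    #define HEIGHT 768
    #define PITCH  (WIDTH * 4)                      /* bytes per scan line */
    static uint8_t *const framebuffer = (uint8_t *)(uintptr_t)0xE0000000u;

    /* The back buffer lives in ordinary RAM; all drawing goes here first. */
    static uint32_t backbuffer[WIDTH * HEIGHT];

    static void put_pixel(int x, int y, uint32_t argb)
    {
        backbuffer[y * WIDTH + x] = argb;           /* no flicker: not on screen yet */
    }

    /* Copy the finished frame to video memory in one pass ("present" it). */
    static void present(void)
    {
        for (int y = 0; y < HEIGHT; y++)
            memcpy(framebuffer + y * PITCH, &backbuffer[y * WIDTH], WIDTH * 4);
    }

Because the visible frame buffer is only touched once per frame, the adapter never catches you half-way through drawing.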
So I'm building a system based on a Raspberry Pi 4 running Linux (image created through Buildroot) driving an LED matrix (64x32 RGB), and I'm very confused about the Linux software stack. I'd like to be able to use OpenGL capabilities at a small resolution, with the result then transferred to a driver that would actually drive the LED matrix.
I've read about DRM, KMS, GEM and other systems and I've concluded the best way to go about it would be to have the following working scheme:
User space:    App
                | OpenGL
                v
Kernel space:  DRM --GEM--> LED device driver
                |
                v
Hardware:      LED matrix
Some of this may not make a lot of sense since the concepts are still confusing to me.
Essentially, the app would make OpenGL calls that generate frames, which could be mapped to buffers in DRM and shared with the LED device driver, which would then drive the LEDs in the matrix.
Would something like this be the best way about it?
I could just program some dumb-buffer CPU implementation, but I'd rather take this as a learning experience.
OpenGL renders into a buffer (called the "framebuffer") that is usually displayed on the screen. But rendering into an off-screen buffer (as the name implies) does not render onto the screen; it renders into an array that can be read back by C/C++ code. There is one indirection on modern operating systems: usually you have multiple windows visible on your screen, so an application can't render onto the screen itself but into a buffer managed by the windowing system, which is then composited into one final image. Linux uses Wayland; multiple Wayland clients can create and draw into buffers that the Wayland compositor then composites.
If you only want to display your own application, just use an off-screen buffer.
If you want to display another application, read its framebuffer by writing your own Wayland compositor. Note this may be hard (I've never done that) if you want to use hardware acceleration.
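As a rough sketch of the off-screen route suggested above (assuming a GLES2 context already exists, e.g. created via EGL on the Pi; error checking omitted), you could render into a framebuffer object sized like the LED matrix and read the pixels back for the LED driver:

    #include <GLES2/gl2.h>
    #include <stdint.h>

    /* Small off-screen render target, 64x32 to match the LED matrix. */
    static GLuint fbo, color_tex;
    static uint8_t pixels[64 * 32 * 4];

    void init_offscreen(void)
    {
        glGenTextures(1, &color_tex);
        glBindTexture(GL_TEXTURE_2D, color_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 32, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color_tex, 0);
    }

    void render_frame(void)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, 64, 32);
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... draw your scene here ... */

        /* Read the rendered frame back into CPU memory for the LED driver. */
        glReadPixels(0, 0, 64, 32, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }

The readback to `pixels` is the hand-off point: whatever feeds the LED matrix (a kernel driver, SPI, the GPIO bit-banging library, ...) consumes that buffer each frame.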
I am Lukas and I have a question about plotting pixels on the screen in protected/long mode, and about video output in an OS in general. My question is: how can I display something on the screen in a high resolution such as 1920*1080, or better for me 1680*1050 (because of my somewhat old monitor), and how can I make a specific driver for my video card, the Intel Integrated HD 620 on my main computer? On my dev server I have an integrated VGA controller on the motherboard, an Intel 4 Series Chipset Integrated Graphics Controller (rev 3). I think that to control this specific card I just need the standard VGA controller stuff, like its ports, the DAC and so on, but I don't know how to make a driver for my external GPU (I mean not integrated on the motherboard), which is a Fujitsu 4 Series Chipset Integrated Graphics Controller, nor where I can get information about it, or about this whole subject in general, and maybe a tutorial. Thanks very much for your help!
PS: Sorry for my English, I am not a native speaker.
The first problem is setting a usable video mode. For this there are 3 cases:
use the BIOS/VBE functions. This is easy enough in early boot code, but horribly messy after boot.
use UEFI functions (GOP, UGA). This is easy in early boot code, but impossible after boot.
write native video drivers for every possible video card. This is impossible - you can't write drivers for video cards that don't exist yet, so every time a new video card is released there will be no driver for however long it takes to write one (and as a sole developer you will never have time to write one).
The "most sane" choice is "boot loader sets up default video mode using whatever firmware provides and gives OS details for frame buffer (then native video driver may be used to change video modes after early boot if there ever is a suitable native video driver)".
Note that for all of these cases (BIOS, UEFI and native driver) there's a selection process involved - you want to get information from the monitor describing what it supports, information from the video card about what it supports, and information from the OS about what it supports; and then use all of that information to find the best video mode that is supported by everything. You don't want to set up a 1920*1600 video mode just because the video card supports it (and then have your old monitor showing a black screen because it doesn't support that specific video mode).
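As a trivial illustration of that selection step (the struct and the mode lists here are invented for the example), you might intersect the card's and the monitor's mode lists and keep the largest match:

    #include <stddef.h>

    struct mode { int width, height, bpp; };

    /* Pick the largest mode that both the video card and the monitor report
     * as supported; returns NULL if nothing matches. */
    const struct mode *pick_mode(const struct mode *card, size_t ncard,
                                 const struct mode *monitor, size_t nmon)
    {
        const struct mode *best = NULL;

        for (size_t i = 0; i < ncard; i++) {
            for (size_t j = 0; j < nmon; j++) {
                if (card[i].width  == monitor[j].width &&
                    card[i].height == monitor[j].height &&
                    card[i].bpp    == monitor[j].bpp) {
                    if (!best || card[i].width * card[i].height >
                                 best->width * best->height)
                        best = &card[i];
                }
            }
        }
        return best;
    }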
For putting a pixel; the formula is mostly "address = video_frame_buffer_address + y * bytes_per_line + x * bytes_per_pixel"; where video_frame_buffer_address is the virtual address for wherever you felt like mapping the frame buffer; and the physical address of the frame buffer, and the values for bytes_per_line and bytes_per_pixel, are details that will come from BIOS or UEFI or a native video driver.
For displaying anything on the screen, putting pixels like this is a huge performance disaster (you don't want the overhead of "address = video_frame_buffer_address + y * bytes_per_line + x * bytes_per_pixel" calculation for every pixel). Instead, you want higher level functions (e.g. to draw characters, draw lines, fill rectangles, ...) so that you can calculate a starting address once, then adjust that address as you draw instead of doing the full calculation again. For example; for drawing a rectangle you might end up with something vaguely like "for each horizontal line in rectangle { memset(address, colour, width); address += bytes_per_line; }".
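A small sketch of both ideas in C (the struct and field names are made up for the example; the real values come from the boot loader as described above):

    #include <stdint.h>
    #include <string.h>

    struct video_info {
        uint8_t *frame_buffer;   /* virtual address you mapped it at */
        uint32_t bytes_per_line; /* a.k.a. pitch */
        uint32_t bytes_per_pixel;
    };

    /* Slow, per-pixel version: full address calculation every time.
     * (Assumes a little-endian pixel layout for the memcpy.) */
    static void put_pixel(struct video_info *v, int x, int y, uint32_t colour)
    {
        uint8_t *address = v->frame_buffer + y * v->bytes_per_line
                                           + x * v->bytes_per_pixel;
        memcpy(address, &colour, v->bytes_per_pixel);
    }

    /* Faster: compute the start address once, then step one line at a time.
     * (Assumes an 8 bits-per-pixel mode so memset can do the fill.) */
    static void fill_rect_8bpp(struct video_info *v, int x, int y,
                               int width, int height, uint8_t colour)
    {
        uint8_t *address = v->frame_buffer + y * v->bytes_per_line + x;
        for (int line = 0; line < height; line++) {
            memset(address, colour, width);
            address += v->bytes_per_line;
        }
    }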
However; you should also know that (to increase the chance that your code will work on more different computers) you will need to support multiple different colour depths and pixel formats; and if you have 10 different drawing functions (to draw characters, lines, rectangles, ...) and support 10 different colour depths/pixel formats, then it adds up to 100 different functions. An easier alternative is to have a generic pixel format (e.g. "32-bit per pixel ARGB") and do all the drawing to a buffer in RAM using that generic pixel format, and then have functions to blit data from the buffer in RAM to the frame buffer while converting the data to whatever the video mode actually wants.
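For instance, a sketch of the "draw in generic 32-bit ARGB, convert while blitting" approach, here converting to a hypothetical 16-bit RGB565 video mode (names and buffer sizes are made up for the example):

    #include <stdint.h>

    /* All drawing functions target this generic 32-bit ARGB buffer in RAM. */
    static uint32_t shadow[1024 * 768];

    /* One conversion/blit routine per video mode; this one handles RGB565. */
    static void blit_to_rgb565(uint8_t *frame_buffer, uint32_t bytes_per_line,
                               int width, int height)
    {
        for (int y = 0; y < height; y++) {
            uint16_t *dst = (uint16_t *)(frame_buffer + y * bytes_per_line);
            for (int x = 0; x < width; x++) {
                uint32_t argb = shadow[y * width + x];
                dst[x] = (uint16_t)(((argb >> 8) & 0xF800) |   /* red   */
                                    ((argb >> 5) & 0x07E0) |   /* green */
                                    ((argb >> 3) & 0x001F));   /* blue  */
            }
        }
    }

With this split, only the blit routines need to know about the real pixel format; the character, line and rectangle functions are written once against the ARGB shadow buffer.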
I have some code which basically draws parallel coordinates using the OpenGL fixed-function pipeline.
The plot has 7 axes and draws 64k lines, so the output is cluttered, but when I run the code on my laptop, which has an Intel i5 processor and 8 GB of DDR3 RAM, it runs fine. One of my friends ran the same code on two different systems, both with an Intel i7, 8 GB of DDR3 RAM and an Nvidia GPU. On those systems the code runs with stuttering and sometimes the mouse pointer becomes unresponsive. If you guys can give some idea why this is happening, it would be of great help. Initially I thought it would run even faster on those systems as they have a dedicated GPU. My own laptop has Ubuntu 12.04 and both the other systems have Ubuntu 10.x.
The fixed-function pipeline is implemented on top of the GPU's programmable features in modern OpenGL drivers. This means most of the work is still done by the GPU. Fixed-function OpenGL shouldn't be any slower than using GLSL to do the same things, it's just really inflexible.
What do you mean by the coordinates having 7 axes? Do you have screenshots of your application?
Mouse stuttering sounds like you are seriously taxing your display driver, which suggests you are making too many OpenGL calls. Are you using immediate mode (glBegin, glVertex, ...)? Some OpenGL drivers might not have the best implementation of immediate mode. You should use vertex buffer objects for your data.
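A rough sketch of what that looks like with the fixed-function pipeline and a plain vertex buffer object (GL 1.5-era API, no shaders; context setup and error checking omitted):

    #include <GL/glew.h>   /* or any loader that exposes glGenBuffers & friends */

    static GLuint vbo;
    static GLsizei num_vertices;

    /* Upload the line vertices once, at startup (2 floats per vertex). */
    void upload_lines(const float *xy, GLsizei count)
    {
        num_vertices = count;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, count * 2 * sizeof(float), xy, GL_STATIC_DRAW);
    }

    /* Per frame: no glBegin/glVertex, just one draw call from the buffer. */
    void draw_lines(void)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, (void *)0);
        glDrawArrays(GL_LINES, 0, num_vertices);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

With 64k lines, this replaces hundreds of thousands of per-vertex API calls per frame with a single glDrawArrays.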
Maybe I've misunderstood you, but here I go.
There are API calls such as glBegin and glEnd which issue commands to the GPU, so they use GPU horsepower, but there are also plain array operations and other functions that have nothing to do with the API: those use the CPU.
Now, it's good practice to upload your models outside the OpenGL draw loop by storing the data in buffers (glGenBuffers etc.) and then using those buffers (VBOs/IBOs) in your draw loop.
If managed correctly, this can decrease the load on your GPU and CPU. Hope this helps.
Oleg
Sorry if this is off-topic here. If so, please feel free to move it to the appropriate site.
How does GDI/GDI+ render to the graphics card (display content on the screen) without the use of a lower-level API for communicating with the GPU, such as DirectX or OpenGL? How does it draw to the screen without the use of either API? Yes, I know that the image is composited and rendered on the CPU, but then it SOMEHOW has to be sent to the GPU before being displayed on the monitor. How does this work?
GDI primitives are implemented by the video card driver. The video driver is provided by the GPU manufacturer and communicates with the GPU using a proprietary register-level interface; no public API is needed at this level.
Contrary to what you claim to know, the image is generally not fully rendered and composited on the CPU. Rather, the video driver is free to use any combination of CPU and GPU processing, and usually a large number of GDI commands (especially bit block transfers, aka blitting) are delegated to the GPU.
Since the proprietary interface has to be powerful enough to support the OpenGL client driver and DirectX driver, it's no surprise that the GDI driver can pass commands to the GPU for execution.
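For example, an ordinary GDI blit like the sketch below (plain Win32 C; the window handle is assumed to exist) just ends up at a driver entry point, and on a machine with a full display driver the copy is typically carried out by the GPU:

    #include <windows.h>

    /* Draw into an off-screen memory DC, then let the driver blit it to the window. */
    void draw_and_blit(HWND hwnd, int width, int height)
    {
        HDC screen  = GetDC(hwnd);
        HDC mem     = CreateCompatibleDC(screen);
        HBITMAP bmp = CreateCompatibleBitmap(screen, width, height);
        HGDIOBJ old = SelectObject(mem, bmp);

        /* Render something with ordinary GDI calls. */
        RECT r = { 0, 0, width, height };
        FillRect(mem, &r, (HBRUSH)GetStockObject(WHITE_BRUSH));
        TextOutA(mem, 10, 10, "Hello, GDI", 10);

        /* The driver decides how to execute this; it is commonly GPU-accelerated. */
        BitBlt(screen, 0, 0, width, height, mem, 0, 0, SRCCOPY);

        SelectObject(mem, old);
        DeleteObject(bmp);
        DeleteDC(mem);
        ReleaseDC(hwnd, screen);
    }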
Early during boot (and during Windows install), when no manufacturer-specific driver is available, the video API does perform all rendering in software and writes to the framebuffer, which is just the memory area that feeds the GPU's RAMDAC and is mapped into the CPU address space. The framebuffer is stored in one of several well-known formats (defined by VESA).
Is it possible to allocate some memory on the GPU without CUDA?
I'm adding some more details...
I need to get the decoded video frames from VLC and run some compositing functions on the video; I'm doing so using the new SDL rendering capabilities.
Everything works fine until I have to send the decoded data to the SDL texture... that part of the code is handled by standard malloc, which is slow for video operations.
Right now I'm not even sure that using GPU video will actually help me.
Let's be clear: are you trying to accomplish real-time video processing? Since your latest update changed the problem considerably, I'm adding another answer.
The "slowness" you are experiencing could be due to several reasons. To get a "real-time" effect (in the perceptual sense), you must be able to process a frame and display it within 33 ms (approximately, for a 30 fps video). This means you must decode the frame, run the compositing functions (as you call them) on it, and display it on the screen within this time frame.
If the compositing functions are too CPU-intensive, then you might consider writing a GPU program to speed up this task. But the first thing you should do is determine where exactly the bottleneck of your application is. You could momentarily strip your application down to just decoding the frames and displaying them on the screen (without executing the compositing functions), just to see how it goes. If it's slow, then the decoding process could be using too much CPU/RAM (maybe a bug on your side?).
I have used FFMPEG and SDL for a similar project once and I was very happy with the result. This tutorial shows how to build a basic video player using both libraries. Basically, it opens a video file, decodes the frames and renders them on a surface for display.
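As a sketch of the SDL2 side (the decode_next_frame stub is hypothetical and stands in for whatever your decoder provides), a streaming texture lets you write decoded frames directly into texture memory instead of going through an extra malloc'd buffer:

    #include <SDL2/SDL.h>

    /* Hypothetical decoder hook: fills 'pixels' with one decoded frame. */
    extern void decode_next_frame(void *pixels, int pitch);

    void video_loop(SDL_Renderer *renderer, int w, int h)
    {
        /* A streaming texture can be locked and written to directly. */
        SDL_Texture *tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING, w, h);
        for (;;) {
            void *pixels; int pitch;
            SDL_LockTexture(tex, NULL, &pixels, &pitch);
            decode_next_frame(pixels, pitch);      /* write straight into the texture */
            SDL_UnlockTexture(tex);

            SDL_RenderClear(renderer);
            SDL_RenderCopy(renderer, tex, NULL, NULL);
            SDL_RenderPresent(renderer);           /* must stay under ~33 ms for 30 fps */
        }
    }

Timing this stripped-down loop (decode and display only, no compositing) is a quick way to find out whether the decoder or the compositing step is the actual bottleneck.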
You can do this via Direct3D 11 Compute Shaders or OpenCL. These are similar in spirit to CUDA.
Yes, it is. You can allocate memory on the GPU through OpenGL textures.
Only indirectly through a graphics framework.
You can use OpenGL which is supported by virtually every computer.
You could use a vertex buffer to store your data. Vertex buffers are usually used to store points for rendering, but you can just as easily use one to store an array of any kind. Unlike textures, their capacity is only limited by the amount of graphics memory available.
http://www.songho.ca/opengl/gl_vbo.html has a good tutorial on how to read and write data to vertex buffers; you can ignore everything about drawing the vertex buffer.
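A minimal sketch of using a buffer object purely as GPU storage, with no drawing at all (assumes an OpenGL context that exposes the GL 1.5 buffer functions):

    #include <GL/glew.h>

    /* Allocate 'size' bytes of GPU-managed memory, upload 'data', read it back into 'out'. */
    void roundtrip(const void *data, void *out, GLsizeiptr size)
    {
        GLuint buf;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_ARRAY_BUFFER, buf);

        /* Allocate and upload in one step; the driver typically keeps this in video memory. */
        glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);

        /* Read the contents back into system memory. */
        glGetBufferSubData(GL_ARRAY_BUFFER, 0, size, out);

        glDeleteBuffers(1, &buf);
    }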