How can I draw a pixel array very fast in C++?
I've seen many questions like this on Stack Overflow,
and they are all answered with:
use GDI (Windows)
use OpenGL
...
But there must be a way; after all, OpenGL itself is doing it somehow!
I'm writing a little raytracer and need to draw every pixel
many times per second.
OpenGL is able to do it, platform-independent and fast,
so how can I achieve that without OpenGL?
And "without opengl" dos not mean
use sdl (slow)
use this / that library
Please only suggest the platform native methods
or the library closest to that.
If it is possible (I know it is),
how can I do this?
Platform-independent solutions are preferred.
Drawing graphics on Linux, you either have to use X11 or OpenGL (and in the near future, Wayland may be another option). On Linux there is no "native" way of doing graphics, because the Linux kernel doesn't care about graphics APIs. It provides interfaces (DRM) on top of which graphics systems are then implemented in user space. If you just want to splat pixels on the screen without caring about windows, you could also mmap /dev/fb0 (the fbdev interface) – but you normally don't want that, because nobody wants their screen clobbered by some program they can't move or hide.
Drawing single points is inefficient, no matter which API is used, due to the protocol overhead.
So X11 it is. The best bet is the MIT-SHM extension: you alter pixels in a shared-memory buffer, which is then blitted as a whole by the X11 server. Of course, doing this with the pure Xlib functions is annoyingly cumbersome; this is exactly what SDL nicely wraps up for you.
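For illustration, here is a minimal sketch of the MIT-SHM approach (assuming a TrueColor visual; window/GC creation and all error handling omitted):

```cpp
// Minimal MIT-SHM sketch: the client writes pixels into a shared-memory
// XImage; the server blits the whole buffer in one request.
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

XImage* create_shm_image(Display* dpy, XShmSegmentInfo* shminfo,
                         int width, int height)
{
    int scr = DefaultScreen(dpy);
    XImage* img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  nullptr, shminfo, width, height);
    shminfo->shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                            IPC_CREAT | 0600);
    shminfo->shmaddr = img->data =
        static_cast<char*>(shmat(shminfo->shmid, nullptr, 0));
    shminfo->readOnly = False;
    XShmAttach(dpy, shminfo);        // make the segment visible to the server
    return img;                      // the raytracer writes into img->data
}

void blit(Display* dpy, Window win, GC gc, XImage* img, int w, int h)
{
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h, False);  // one request
    XSync(dpy, False);
}
```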
The other option is OpenGL. OpenGL is not a library! It's a system-level API that gives you almost direct access to the GPU, and it integrates nicely with X11. Yes, the API is provided through a library that gets loaded, but technically that library is just a "wrapper" or "interface" to the actual driver. Drawing single points with OpenGL makes no sense either. But you can batch up several points into a list (using a vertex array) and then process that list. So the idea is to collect all the incoming points between two display refresh intervals and draw them in one single batch.
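A minimal sketch of such point batching, using a legacy client-side vertex array (context creation not shown):

```cpp
// Collect points between refreshes, then draw them in one call.
#include <GL/gl.h>
#include <vector>

struct Point { float x, y, r, g, b; };
std::vector<Point> batch;            // filled by the raytracer

void flush_batch()
{
    if (batch.empty()) return;
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(Point), &batch[0].x);
    glColorPointer(3, GL_FLOAT, sizeof(Point), &batch[0].r);
    glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(batch.size()));
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    batch.clear();
}
```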
platform independent solutions are preferred.
Why are you asking about native APIs then? By definition there can be no platform-independent native API. Either you're native, or you're platform-independent.
And in your particular scenario I think SDL would be the best solution, because it offers just the right kind of abstraction and program-side interface for a raytracer. Just FYI: virtual machines like QEMU use SDL.
Or you use OpenGL, which is a truly platform-neutral API that is widely supported.
Drawing graphics on Linux you either have to use X11, or OpenGL.
This is absolutely false! Counterexample: there are platforms that don't run X11, yet they still display pixels (e.g. fonts).
Side note: OpenGL usually depends on X11 (it's possible, albeit hard, to run OpenGL without X11).
As #datenwork says, there are at least two other ways to draw pixels:
The framebuffer device (fbdev), an abstraction for interfacing with graphics hardware. Very old, designed by Martin Schaller; see the kernel docs. Source code is here. Also see here. Here's the simplest possible framebuffer driver. (See the sketch after this list for how to push a pixel through it.)
The Direct Rendering Manager (DRM), a kernel subsystem that provides an API for userland apps to send commands/data directly to the GPU. (Seems suspiciously similar to what OpenGL does, but I don't know!) Source code is here. Here's a DRM example that initializes a simple display pipeline.
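As promised in the first item, a minimal sketch of drawing a pixel through fbdev (assuming /dev/fb0 exists and a 32 bpp XRGB pixel layout; error handling omitted):

```cpp
// Map the framebuffer device and paint one pixel: the lowest-level way
// to draw on Linux without X11.
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

int main()
{
    int fd = open("/dev/fb0", O_RDWR);

    fb_var_screeninfo vinfo{};
    fb_fix_screeninfo finfo{};
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);  // resolution, bits per pixel
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);  // bytes per scanline

    auto* fb = static_cast<uint8_t*>(mmap(nullptr, finfo.smem_len,
                                          PROT_READ | PROT_WRITE,
                                          MAP_SHARED, fd, 0));

    // Paint a red pixel at (100, 100).
    unsigned x = 100, y = 100;
    auto* px = reinterpret_cast<uint32_t*>(
        fb + y * finfo.line_length + x * (vinfo.bits_per_pixel / 8));
    *px = 0x00FF0000;

    munmap(fb, finfo.smem_len);
    close(fd);
}
```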
Both of these are part of the kernel, so they're lower-level than X11, which is not part of the kernel. Both can draw arbitrary pixels (e.g. penguins). Being kernel interfaces, though, both are tied to Linux rather than platform-independent the way OpenGL is.
See this for more on how to draw stuff on Linux.
Related
Motivation - Write a program in C (and Assembly, if required) to color a rectangular area of the screen red.
STRICT requirements - GNU/Linux running with the bare minimum of utilities and interfaces in text/console mode. So: no X (or equivalents like Wayland/Mir), no non-default libraries or interfaces (outside POSIX, LSB, etc. provided by the kernel), and no extra assumptions except the presence of the device driver for the monitor.
Effectively, what I am looking for is information on how to write a program which will eventually send a signal through the VGA port and cable to the monitor to color a particular portion of the screen red.
Apologies if this sounds rude, but no "Why do you want to do this?" or "Why don't you use the ABC library?" answers, please. I am trying to understand how to write an implementation of the X server or a kernel framebuffer (/dev/fb0) library, for example. It is OK to provide a link to the source of a C library.
no extra assumptions except the presence of the device driver for the monitor.
That means you can use X or Wayland, because those form the graphics driver infrastructure on Linux.
Linux (the kernel) by itself doesn't contain any graphics primitives. It provides some interfaces to talk to the GPU, allocate memory on it, and configure the on-screen framebuffer. But apart from raw framebuffer memory access, the Linux kernel has no way to perform drawing operations. For that you need some infrastructure in user space.
Wayland builds on top of DRI2, which in turn talks to the DRM kernel API. On top of that you need a GPU-dependent state tracker; Mesa has state trackers for a number of GPUs and provides OpenGL and OpenVG frontends.
The NVidia and ATI proprietary, closed-source graphics drivers are designed to work with X only. So with those, to make use of the GPU you must use X. That's the way it is.
Outside of that, you can manipulate the on-screen framebuffer memory through /dev/fb0, but that's mere pixel pushing, without any GPU acceleration.
Once upon a time we had svgalib (or was it called vgalib?), which did exactly what you are trying to do. I would recommend that you look at its source code. I don't know if it is still possible to find it anywhere, or if it would actually work with a modern kernel. Whatever you do, be prepared to reboot often.
For whatever it's worth, for any future viewers: I have found a decent tutorial at http://betteros.org/tut/graphics1.php. It goes through the framebuffer, DRI/DRM, and the X Window System at "the lowest level possible" (basically ioctls and file reads/writes).
I have gotten both the framebuffer and DRI/DRM examples to work on QEMU Debian (on a macOS host) and on a Raspberry Pi, with a bit of modification for the latter.
The graphical user interface hides mysterious mechanics under its curtain. It mixes 2D and 3D contexts on a single screen and allows for seamless composition of these two very different worlds. But in what way, and at which level, are they actually interleaved?
Practice has shown that an OpenGL context can be embedded into a 2D widget library, so the whole 2D interface can be backed by OpenGL. Also, some applications may exploit hardware acceleration while others don't, despite being rendered on the same screen. Does the graphics card "know" about the 2D and 3D areas on the screen, with the window manager creating the illusion of a cohesive front-end? One can notice accelerated windows (3D, video) "hopping" to fit into the 2D interface when, e.g., scrolling a web page or moving a video player across the screen.
The question seems trivial, but I haven't met anybody able to give me a comprehensive answer: one which would enable me to embed an OpenGL context into a GTK+ application and understand why and how it works. I've tried GtkGlExt and GLUT, but I would like to understand the topic deeply and write my own solution as part of an academic project. I'd like to know what the relations are between X, GLX, GTK+, OpenGL and the window manager, and how to navigate this network of libraries to program it consciously.
I don't expect that someone will write here a dissertation, but I will be grateful for any indications, suggestions or links to articles on that topic.
You're thinking much, much, much too complicated. Toolkits like GTK+ or Qt add quite a layer of abstraction over something that's actually rather simple: your system's graphics device consists of a processor and some memory it can operate on. In the simplest case the processor is the regular system CPU and the memory is the normal system memory. Modern computers feature a special-purpose graphics processor (GPU), though, which has its own high-bandwidth memory.
The memory holds framebuffers. Logically, a framebuffer is a 2D array of values. The GPU can be programmed to process the values in the framebuffers in certain ways, and that can be used to draw into framebuffers. The monitors displaying a picture are connected to a special piece of circuitry (usually called a RAMDAC or CRTC) which continuously feeds the data of a certain framebuffer in the memory to the screen. So in the GPU's memory there's a framebuffer that goes directly to the screen: if you draw there, things will appear on the screen.
A program like the X11 server can load drivers that "know" how to program the GPU to draw graphical primitives. X11 itself defines certain graphics primitives, and extension modules can add further ones. X11 also allows segregating the framebuffers in GPU memory into logical areas called Drawables. Drawables on the on-screen framebuffer are called Windows. Since logical Windows can overlap, the X server also manages Z stacking, which it uses to sort the Windows for redraw. Every time a client wants to draw to some Window, the X11 server tells the GPU that the drawing operations may modify only those pixels of the framebuffer in which the Window being drawn to is visible (this is called the "pixel ownership test"). The X11 server will also create Drawables (i.e. framebuffers) that are not part of the on-screen framebuffer memory area; those are called PBuffers or Pixmaps in X11 terminology (with a special extension it's also possible to move a Window off-screen).
However, all those Drawables are just memory. Technically they are canvases to draw on with something. This something is called "graphics primitives". X11 itself provides a certain set, named X core. There's also a de-facto standard extension called XRender, which provides primitives not found in X core. However, neither X core nor XRender provide graphics primitives with which the impression of a 3D drawing could be generated. So there's another extension, called GLX, which teaches the X11 server another set of graphics primitives, namely in the form of OpenGL.
However, X core, XRender and GLX/OpenGL are all just different pens, brushes and pencils that all operate on the same kind of canvas, namely a simple framebuffer managed by X11.
And what do toolkits like Qt or GTK+ do, then? Well, they use X11 and the graphics primitives it provides to actually draw widgets, like buttons, menus and things like that, which X11 doesn't know about.
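To make the GLX part concrete, here is a minimal sketch of attaching an OpenGL context to an X11 Window via GLX (error handling and the event loop omitted); libraries like GtkGlExt essentially do this under the hood:

```cpp
// Create an X11 window with a GLX-capable visual and draw into it with OpenGL.
#include <GL/glx.h>
#include <X11/Xlib.h>

int main()
{
    Display* dpy = XOpenDisplay(nullptr);
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    // The window must be created with the visual GLX selected.
    Window root = RootWindow(dpy, vi->screen);
    XSetWindowAttributes swa{};
    swa.colormap = XCreateColormap(dpy, root, vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, root, 0, 0, 640, 480, 0, vi->depth,
                               InputOutput, vi->visual, CWColormap, &swa);
    XMapWindow(dpy, win);

    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);  // direct rendering
    glXMakeCurrent(dpy, win, ctx);   // from here on, GL calls target this Window

    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);        // present the back buffer

    // ... event loop and cleanup omitted ...
}
```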
First off, let me apologize right off the bat in case this has already been answered, because I might just be searching for it under unusual search terms.
I am looking to draw 2D graphics in an application that uses DirectX to draw its own graphics (a game). I will be doing that by injecting a DLL into the application (that part I have no questions about, I can do that) and drawing my graphics. But not being really good at DirectX/OpenGL, I have a couple of fundamental questions to ask.
1) In order to draw graphics on that window, will I need to get a pre-existing context from the process memory, some sort of handle to the drawing scene?
2) If the application uses DirectX, can I use OpenGL graphics on it?
Please let me know how I can approach this. Any details will be appreciated :-)
Thank you in advance.
Your approach of injecting a DLL is indeed the right way to go. Programs like FRAPS use the same approach. I can't tell you about the method for Direct3D, but for OpenGL you'd do roughly the following things:
First you must hook into the functions wglMakeCurrent, glFinish and wglSwapBuffers of opengl32.dll, so that your DLL notices when an OpenGL context is selected for drawing. Pass their calls through to the OS. When wglMakeCurrent is called, use the function GetPixelFormat to find out if the window is double-buffered or not. Also use the glGet… OpenGL calls to find out which version of OpenGL context you're dealing with. If you have a legacy OpenGL context, you must use different methods for drawing your overlay than for a modern OpenGL-3-or-later core context.
In the case of a double-buffered window, use your hook on wglSwapBuffers to perform further OpenGL drawing operations. OpenGL is just pens and brushes (in the form of points, lines and triangles) drawing on a canvas. Then pass through the wglSwapBuffers call to make everything visible.
In the case of a single-buffered context, the function to hook is glFinish instead of wglSwapBuffers.
Drawing 2D with OpenGL is as simple as disabling depth testing and using an orthographic projection matrix. You can change OpenGL state whenever you desire to do so. Just make sure you restore everything to its original condition before you leave the hooks.
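A minimal sketch of such a wglSwapBuffers hook for a legacy (fixed-function) context; real_wglSwapBuffers stands in for the pointer to the original function that your hooking method provides:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Assumption: your hooking library fills this with the original function.
static BOOL (WINAPI* real_wglSwapBuffers)(HDC) = nullptr;

BOOL WINAPI hooked_wglSwapBuffers(HDC hdc)
{
    glPushAttrib(GL_ALL_ATTRIB_BITS);       // save fixed-function state
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);             // simple 2D coordinate system
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);

    glBegin(GL_TRIANGLES);                  // the overlay itself
    glColor3f(1, 0, 0);
    glVertex2f(0.02f, 0.02f);
    glVertex2f(0.10f, 0.02f);
    glVertex2f(0.02f, 0.10f);
    glEnd();

    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glPopAttrib();                          // restore everything we touched

    return real_wglSwapBuffers(hdc);        // pass through to the OS
}
```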
"1) In order to draw graphics on that window, will I need to get a pre-existing context from the process memory, some sort of handle to the drawing scene?"
Yes, you need to make sure your hooks catch the important context creation functions.
For example, all variations of CreateDevice in D3D are interesting to you.
You didn't mention which DirectX version you are using, but there are some differences between the versions.
For example, in DirectX 9 you'd mostly be interested in functions that:
1. Create/return IDirect3DSwapChain9 objects
2. Create/return IDirect3DDevice9/IDirect3DDevice9Ex objects
In newer versions of DirectX this code was split into (mostly) Device, DeviceContext, and DXGI.
If you are on a "specific mission", share which DirectX version you are addressing.
Apart from catching all the needed objects to allow your own rendering, you also want to catch all presentation events ("SwapBuffers" in GL, "Present" in DX), because that's the moment at which you want to add your overlay.
Since it seems that you are attempting to render an overlay on top of DX applications, allow me to warn you that making a truly generic solution (one that works on all games) isn't easy, mostly due to the need to support different DX versions along with the numerous ways to create devices and swap chains.
If you are focused on a specific game/application it is, naturally, much easier.
"2. If the application uses DirectX, can I use OpenGL graphics on it?"
Well, first of all: yes, it's possible.
The terminology you want to search for is OpenGL/DirectX interoperability (or, in short, interop).
Here's an example:
https://sites.google.com/site/snippetsanddriblits/OpenglDxInterop
I don't know if the extension they used is available only on nVidia devices; check it.
That said, you need a really good motivation to do this; generally I would simply stick with DX for both hooking and rendering.
I assume that internal interop between different DX versions is the better option.
I'd personally probably go with DirectX 9 for your own rendering code.
Of course, if you only need to support a single DirectX version, no interop is needed.
Bonus:
If you ever need to generate full wrappers of C++ classes, a quick-and-dirty DLL wrapper, or just a general global function hook, feel free to use this lib that I created:
http://code.google.com/p/hookit/
It's far from a fully tested tool, just something I hacked together in 2 days, but I found it super useful.
Note that in your case, I recommend just using VTable hooking; you'll probably have to hardcode the function offset into the table, but that's not likely to change.
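For illustration, a minimal sketch of such a VTable hook; the slot index is exactly the hardcoded offset mentioned above and must be looked up for the specific interface and SDK version you target:

```cpp
#include <windows.h>

// Swap one entry of an object's virtual function table for our own function
// and return the original, so the hook can pass calls through.
void* hook_vtable_slot(void* object, int slot, void* replacement)
{
    void** vtable = *reinterpret_cast<void***>(object);  // first word: vtable
    DWORD old;
    VirtualProtect(&vtable[slot], sizeof(void*), PAGE_READWRITE, &old);
    void* original = vtable[slot];
    vtable[slot] = replacement;
    VirtualProtect(&vtable[slot], sizeof(void*), old, &old);
    return original;
}

// Usage sketch: hook IDirect3DDevice9::Present on an existing device.
// The slot index 17 is an assumption; verify it against d3d9.h.
// auto real_present = hook_vtable_slot(device, 17, (void*)&my_present);
```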
Good luck :)
I know GLUT's quadrics; I used them in a few programs when I was in school. Now I'm working on a real-world application and I find myself in need of drawing some geometric primitives (cubes, spheres, cylinders), but I also know that GLUT is no longer supported and its last update was in something like 2005. So I'm wondering if there's anything other than GLUT's quadrics to draw such geometric shapes. I'm asking if there's anything ready-made before I go ahead and start making my own from vertex arrays.
Yes, you can! You can use the native API of the OS to create a window with OpenGL capabilities.
The advantage of GLUT is that it makes this task easier and is a cross-platform solution.
There are other cross-platform libraries that are more complex to work with but provide the same functionality, like Qt.
NeHe has a huge amount of examples that use several different technologies to accomplish what you are looking for. Check the bottom of the page.
Here is a demo for Windows that creates a window and draws a simple OpenGL triangle inside it. The demo removes the window frame entirely, to give the impression of a triangle floating on the screen. And here is a similar demo for Linux.
GLUT is just a convenience framework that came to life well after OpenGL. The problem is not that GLUT is unmaintained; the problem is that GLUT was not, and never will be, meant for serious applications.
Then there's also GLU, which provides some primitives, but just like GLUT it's merely a companion library. You don't need either.
The way OpenGL works is that you deliver it arrays of vertex attributes (position, color, normal, texture coordinates, etc.) and tell it to draw a set of primitives (points, lines, triangles) from those attributes, using a second array of indices referencing into the vertex attribute arrays.
There used to be an immediate mode in versions prior to OpenGL-3 core, but that got deprecated – good riddance. Its only use was for populating display lists, which used to have a slight performance advantage if one was using indirect GLX. With VBOs (server-side, i.e. GPU-side, vertex attribute storage) that's no longer an issue.
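A minimal sketch of that vertex-array/index-array style, drawing a cube with client-side arrays (a VBO upload works the same way; winding and normals omitted for brevity):

```cpp
#include <GL/gl.h>

// 8 corners of a cube, referenced by 12 triangles via an index array.
static const GLfloat vertices[] = {
    -1,-1,-1,   1,-1,-1,   1, 1,-1,  -1, 1,-1,
    -1,-1, 1,   1,-1, 1,   1, 1, 1,  -1, 1, 1,
};
static const GLubyte indices[] = {
    0,1,2, 2,3,0,   4,5,6, 6,7,4,   0,1,5, 5,4,0,
    2,3,7, 7,6,2,   0,3,7, 7,4,0,   1,2,6, 6,5,1,
};

void draw_cube()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawElements(GL_TRIANGLES, sizeof(indices) / sizeof(indices[0]),
                   GL_UNSIGNED_BYTE, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```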
While GLUT has not been maintained, FreeGLUT has. There are still several alternatives though.
GLFW is a cross-platform windowing system which is easy to get up and running, and also provides the programmer with control of the main application loop.
SFML has support for many languages and also integration capabilities with other windowing schemes, in addition to being cross-platform.
Finally, Qt is another, popular, cross-platform windowing framework.
Now I'm working on a real world application and I find myself in need of drawing some geometric primitives (cubes, spheres, cylinders),
Actually, I don't remember anything except GLUT that would provide generic primitives. This might have something to do with the fact that those generic primitives are very easy to implement from scratch.
You can use other libraries (libsdl, for example, or Qt) to initialize OpenGL, though.
Most likely, if you find a generic library for loading meshes (or anything that provides a "Mesh" object), it will have primitives.
is a no longer supported and it's last update was in like 2005
Contrary to popular belief, code doesn't rot and it doesn't get worse with time. No matter how many years ago it was written, if it still works, you can use it.
Also, there is the FreeGLUT project. Last update: 2012.
For example, some games offer three different display modes:
OpenGL
DirectX
Software
What is this software mode? How do programmers make a game engine that generates images without using OpenGL or DirectX? Are there classes in C++ that generate frames?
Software means exactly that: software.
Rendering, in the end, is just coloring pixels via some algorithm. That algorithm can be executed by dedicated hardware, but you could simply implement those functions yourself in ordinary code. That doesn't mean it's particularly fast, though; it takes a great deal of skill to implement a triangle rasterizer with decent speed.
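For a taste of what that involves, here is a minimal sketch of a software triangle rasterizer based on edge functions; a real one would add sub-pixel precision, attribute interpolation, clipping and far more:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Signed area term: its sign says on which side of edge a->b point p lies.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void fill_triangle(std::vector<uint32_t>& fb, int w, int h,
                   Vec2 v0, Vec2 v1, Vec2 v2, uint32_t color)
{
    // Only scan the triangle's bounding box, clamped to the framebuffer.
    int x0 = std::max(0,     (int)std::min({v0.x, v1.x, v2.x}));
    int y0 = std::max(0,     (int)std::min({v0.y, v1.y, v2.y}));
    int x1 = std::min(w - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int y1 = std::min(h - 1, (int)std::max({v0.y, v1.y, v2.y}));

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};
            float w0 = edge(v1, v2, p);
            float w1 = edge(v2, v0, p);
            float w2 = edge(v0, v1, p);
            // Inside if p is on the same side of all three edges
            // (accepts both windings).
            if ((w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (w0 <= 0 && w1 <= 0 && w2 <= 0))
                fb[y * w + x] = color;
        }
}
```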
Software Mode can mean two things:
A system-provided emulation layer. For example, DX11 provides the WARP device, where you, as the application programmer, just specify "I want to use WARP" and the rest is done by DirectX. The emulation layer basically does option number 2:
Do it all by hand. Essentially, a hardware-accelerated GFX card mostly just draws triangles. You can write a function that draws the pixels of a textured triangle directly into the screen memory of the graphics card. It's not very fast nowadays (that's why hardware-accelerated GFX cards exist), but that's how it was done in the 80s and 90s, when no such cards existed yet.
For a rough explanation of how a texture mapper works, just look at the Wikipedia article: https://en.wikipedia.org/wiki/Texture_mapping
I'm not aware of any GFX libs that provide their own software layer, but I'm sure they exist somewhere.
As an example, DirectX has a layered setup: there is the code interface, which interacts with the HAL, or hardware abstraction layer. Depending on the capabilities of the underlying hardware, the HAL might run some pieces of code on the CPU because the drivers reported that the GPU doesn't support that feature. (Yes, I know this is a gross oversimplification.)
see: http://msdn.microsoft.com/en-us/library/gg426101(v=vs.85).aspx
and: http://www.codeproject.com/KB/graphics/DirectX_Lessons_2_.aspx