Multi-monitor 3D Application - C++

I've been challenged with a C++ 3D application project that will use 3 displays, each one rendering from a different camera.
Recently I learned about Ogre3D but it's not clear if it supports output of different cameras to different displays/GPUs.
Does anyone have any experience with a similar setup and Ogre or another engine?

At least on most systems (e.g., Windows, macOS), the windowing system creates a virtual desktop, with different monitors mapped to different parts of that desktop. If you want to, you can (for example) create one big window that covers all three displays. If you set that window up to use OpenGL, almost anything that uses OpenGL (almost certainly including Ogre3D) will work just fine, though in some cases producing that much output resolution can tax the graphics card to the point that it's a bit slower than usual.
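For instance, a rough Win32 sketch of sizing one window to the whole virtual desktop (the window class name "GLWindowClass" is just a placeholder you'd register yourself; the OpenGL context is then created on this window's DC as usual):

    #include <windows.h>

    HWND CreateSpanningWindow(HINSTANCE hInstance)
    {
        // Bounding rectangle of the virtual desktop (all monitors together).
        int x = GetSystemMetrics(SM_XVIRTUALSCREEN);
        int y = GetSystemMetrics(SM_YVIRTUALSCREEN);
        int w = GetSystemMetrics(SM_CXVIRTUALSCREEN);
        int h = GetSystemMetrics(SM_CYVIRTUALSCREEN);

        // WS_POPUP gives a borderless window covering everything.
        return CreateWindowEx(0, TEXT("GLWindowClass"), TEXT("Spanning GL Window"),
                              WS_POPUP | WS_VISIBLE,
                              x, y, w, h,
                              NULL, NULL, hInstance, NULL);
    }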
If you want to deal with a separate window on each display, things might be a bit more complex. OpenGL itself doesn't (even attempt to) define how to handle display in multiple windows -- that's up to a platform-specific set of functions. On Windows, for example, you have a rendering context for each window, and have to use wglMakeCurrent to pick which rendering context you draw to at any given time.
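A bare-bones sketch of that per-window switching, assuming the three windows and their contexts were already created with suitable pixel formats:

    #include <windows.h>
    #include <GL/gl.h>

    void RenderAll(HWND windows[3], HGLRC contexts[3])
    {
        for (int i = 0; i < 3; ++i)
        {
            HDC dc = GetDC(windows[i]);
            wglMakeCurrent(dc, contexts[i]);   // select this window's rendering context

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the scene from camera i ...

            SwapBuffers(dc);
            ReleaseDC(windows[i], dc);
        }
        wglMakeCurrent(NULL, NULL);            // detach when done
    }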
If memory serves, the Windows port of Ogre3D supports multiple rendering contexts, so this shouldn't be a problem either. I'd expect it can work with multiple windows on other systems as well, but I haven't used it on any other systems, so I can't say with any certainty.
My immediate guess, however, is that the triple monitor support will be almost inconsequential in your overall development effort. Of course, it does mean that you (can tell your boss) need a triple monitor setup for development and testing, which certainly isn't a bad thing! :-)
Edit: OpenGL itself doesn't specify anything about full-screen windows vs. normal windows. If memory serves, at least on Windows, to get a full-screen application you use ChangeDisplaySettings with CDS_FULLSCREEN. After that, it treats essentially the entire virtual desktop as a single window. I don't recall having done that with multiple monitors though, so I can't say much with any great certainty.
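Something along these lines (the mode values are just example numbers):

    #include <windows.h>

    bool EnterFullscreen(int width, int height, int bpp)
    {
        DEVMODE dm = {};
        dm.dmSize       = sizeof(dm);
        dm.dmPelsWidth  = width;
        dm.dmPelsHeight = height;
        dm.dmBitsPerPel = bpp;
        dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

        return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
    }

    // Restore the original display mode when done:
    // ChangeDisplaySettings(NULL, 0);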

There are several things to be said about multihead support in the case of OGRE3D. In my experience, a working solution is to use the source version of Ogre 1.6.1 and apply this patch.
Using this patch, users have managed to render an Ogre application on a six-monitor configuration.
Personally, I've successfully applied this patch and used it with the StereoManager plugin to hook up Ogre applications to a 3D projector. I only used the Direct3D9 backend. The StereoManager plugin comes with a modified demo (Fresnel_Demo), which can help you set up your first multihead application.
I should also add that the multihead patch is now part of the Ogre core as of version 1.7. Ogre 1.7 was recently released as an RC1, so this might be the quickest and easiest way to have it working.
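For illustration, a rough sketch of what the multihead setup can look like with the 1.7 API; the "monitorIndex" parameter name is from memory and should be double-checked against the render system you use:

    #include <Ogre.h>

    // One render window per monitor, each showing its own camera.
    void CreateWindows(Ogre::Root* root, Ogre::Camera* cams[3])
    {
        for (int i = 0; i < 3; ++i)
        {
            Ogre::NameValuePairList params;
            params["monitorIndex"] = Ogre::StringConverter::toString(i);

            Ogre::RenderWindow* win = root->createRenderWindow(
                "Window" + Ogre::StringConverter::toString(i),
                1920, 1080, /*fullScreen=*/true, &params);

            win->addViewport(cams[i]);
        }
    }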

Related

Get image data of known process

For image processing I want to get all pixel information from a given process.
Concretely, it's for testing an image hashing algorithm for identifying Hearthstone cards, so I need to get a screenshot of the given process.
How can I solve this on Windows?
My idea so far:
Get the process name.
Get the process ID.
Get the window handle.
I have no idea how to go further from this point.
I hope it's understandable what I want to achieve.
Unfortunately, there is no general method I'm aware of for getting the pixels of a particular window. Depending on how the target application draws itself, this task can be very simple or very complicated. If we were talking about an application that uses good old GDI, then you could just get yourself an HDC to the window via GetWindowDC() and BitBlt/StretchBlt the content over into a bitmap of your own.
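In that GDI case, a minimal sketch might look like this (error handling omitted):

    #include <windows.h>

    HBITMAP CaptureWindow(HWND hwnd)
    {
        RECT rc;
        GetWindowRect(hwnd, &rc);
        int width  = rc.right - rc.left;
        int height = rc.bottom - rc.top;

        HDC windowDC = GetWindowDC(hwnd);
        HDC memDC    = CreateCompatibleDC(windowDC);
        HBITMAP bmp  = CreateCompatibleBitmap(windowDC, width, height);

        HGDIOBJ old = SelectObject(memDC, bmp);
        BitBlt(memDC, 0, 0, width, height, windowDC, 0, 0, SRCCOPY);
        SelectObject(memDC, old);

        DeleteDC(memDC);
        ReleaseDC(hwnd, windowDC);
        return bmp;   // caller releases it with DeleteObject()
    }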
Unfortunately, the target application in your case appears to be a game. Games typically use 3D graphics APIs like Direct3D or OpenGL for drawing. Assuming that you cannot simply modify the target application to just send the desired data over to you of its own free will, the only way I'm aware of to specifically record output from such applications is to hook into the graphics API and capture the data from underneath the API. This can be done; however, implementing such a system is quite involved. There might be existing libraries to aid with writing such applications, but I don't know any that I could recommend here.

If you don't have to capture the game content in real time, you could just use a screen recording application to, e.g., record a video and then use that video as input for your algorithm. There are also graphics debugging tools like Nsight Graphics or RenderDoc that you could use. Be aware that games, particularly online games, these days often have cheat protection systems that are likely to get very angry at you if you attempt to hook into the game…
Apart from all that, one alternative approach might be to use DXGI Output Duplication to just capture the entire desktop. While you won't be able to target one specific application (as far as I know), this would potentially have several advantages: First of all, it's only moderately complex to set up compared to a fully-fledged API-hook-based approach. Second, it should work regardless of what API the target application uses and even if the application is in fullscreen mode. Third, since you will have the data delivered straight from the operating system, you shouldn't have any issues with cheat protection. You can use MonitorFromWindow() to get the monitor your target window appears on and then enumerate all outputs of all DXGI adapters to find the one that corresponds to that HMONITOR…
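A rough sketch of finding the right output and starting duplication, assuming you already created an ID3D11Device (here called device); error handling and COM cleanup are omitted:

    #include <d3d11.h>
    #include <dxgi1_2.h>

    IDXGIOutputDuplication* DuplicateMonitorOf(HWND target, ID3D11Device* device)
    {
        HMONITOR monitor = MonitorFromWindow(target, MONITOR_DEFAULTTONEAREST);

        IDXGIFactory1* factory = nullptr;
        CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory);

        IDXGIAdapter1* adapter = nullptr;
        for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
        {
            IDXGIOutput* output = nullptr;
            for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
            {
                DXGI_OUTPUT_DESC desc;
                output->GetDesc(&desc);
                if (desc.Monitor == monitor)
                {
                    // Found the output the target window lives on.
                    IDXGIOutput1* output1 = nullptr;
                    output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);

                    IDXGIOutputDuplication* duplication = nullptr;
                    output1->DuplicateOutput(device, &duplication);
                    return duplication;   // frames arrive via AcquireNextFrame()
                }
                output->Release();
            }
            adapter->Release();
        }
        return nullptr;
    }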

Draw Graphics w/o Desktop Environment C++?

Okay, this is a really strange question and I'm not sure how to phrase it, but I can't seem to find anything on it anywhere, most likely because I'm not using the correct terminology. Also, this may be operating-system specific; if it is, I'm using Debian.
Basically, when you boot an older computer or a modern server computer, or stuff along those lines, they boot to a terminal screen. Where all you do is type stuff. And if you want to do anything graphically, you usually download a desktop environment.
But I'm wondering, how could I go about drawing graphics without a desktop environment?
I remember back on MS-DOS you could use QBASIC to change the screen mode and you could then draw colored lines onto the screen like that. It's probably much more complicated in C++, but I'd still like to be pointed in the right direction.
Sorry if this question is a bit unspecific, but I'd really like to be pointed in the right direction.
This is done by using a framebuffer console. Then you use a framework/library that can draw on that. For example DirectFB. There's also some small libraries floating around, like libFB. I think SDL can also use the framebuffer. Never tried it myself though.
Then there are framebuffer versions of GUI toolkits like Gtk+ and Qt, if GUI widgets are what you want.
There's also SVGAlib, which talks to graphics cards directly, but it's outdated by now. Not recommended. In general, you're looking for "Linux framebuffer graphics". That should get you a few starting points.
To get a framebuffer console, you need to configure your kernel accordingly. Usually you enable a KMS driver for your graphics card, and also enable the KMS framebuffer. If there isn't a KMS driver for your card, you can use a generic VESA framebuffer console that works on most hardware (although, being just generic VESA, it is slow and non-accelerated).
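For illustration, a minimal sketch of poking pixels straight into /dev/fb0, assuming a 32 bpp mode and running from a text console (not under X):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdint>

    int main()
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) return 1;

        fb_var_screeninfo vinfo;
        fb_fix_screeninfo finfo;
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   // resolution, bits per pixel
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   // line length in bytes

        std::size_t size = finfo.line_length * vinfo.yres;
        auto* fb = static_cast<uint8_t*>(
            mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

        // Fill the top-left 100x100 pixels with white (assumes 32 bpp).
        for (unsigned y = 0; y < 100; ++y)
            for (unsigned x = 0; x < 100; ++x)
                *reinterpret_cast<uint32_t*>(
                    fb + y * finfo.line_length + x * 4) = 0xFFFFFFFF;

        munmap(fb, size);
        close(fd);
        return 0;
    }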
Commonly, a "desktop environment" (on Linux) is made of two parts: an X Window-like graphics "library" plus a "window manager" (GNOME, KDE, Xfce, ...). So, if I understand your question, you only have to set up an X Window system without a window manager.
On MS-DOS, you could write software which wrote to the screen, either by writing into a range of RAM which was shared by the video controller, or calling a BIOS API.
A newer O/S (i.e. Windows) will prevent you from doing either of those: instead you call an O/S API, which calls to an O/S-specific video device driver, which outputs to the hardware.
As I read it you're asking how to deal directly with the graphics hardware.
That depends on the hardware.
If you have an old PC at hand and want to experiment with it, then you need correspondingly old development software that can run on that hardware under the particular OS, i.e. some C compiler from those days running in MS-DOS. You may be able to do this in a "DOS box" in Windows (not a console window but an emulation of the old PC). 64-bit Windows 7 does not support DOS boxes, but there is a free alternative called DOSBox.
Then, if you go that route, you can search for "graphics adapter" graphics modes etc. on the net.
Basically, with the old PC architecture and a program running under DOS, you used a DOS service to switch the graphics mode, and then you accessed the graphics memory at a known memory address for the mode.
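For illustration only, a sketch of that DOS-era approach as it looked with a 16-bit compiler such as Turbo C (it will not build with a modern compiler): set VGA mode 13h (320x200, 256 colours) via BIOS interrupt 10h, then poke pixels at segment A000.

    #include <dos.h>
    #include <conio.h>

    int main(void)
    {
        union REGS r;
        unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
        int i;

        r.h.ah = 0x00;            /* BIOS "set video mode" service */
        r.h.al = 0x13;            /* mode 13h                       */
        int86(0x10, &r, &r);

        for (i = 0; i < 320; ++i)
            vga[100 * 320 + i] = 15;   /* white horizontal line */

        getch();                  /* wait for a key, then restore text mode */
        r.h.ah = 0x00;
        r.h.al = 0x03;
        int86(0x10, &r, &r);
        return 0;
    }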
The curses (or ncurses) library is the old way of doing it in Unix flavours, although these days there is probably something better...

Cross Platform GUI - Rendering Process

I have been using a few cross-platform GUI libraries (such as FLTK, wxWidgets, GTK++), however I feel like none fulfil my needs as I would like to create something that looks the same regardless of the platform (I understand that there will be people against building GUI's that don't have a native look on the platforms but that's not the issue here).
To build my controls, I usually rely on basic shapes provided by the library and make my way up binding & coding everything together...
So I decided to give it a try and do some OpenGL for 2D GUI programming (as it would still be cross-platform). With that in mind, I couldn't help but notice that the applications I have written using wxWidgets & FLTK usually have an average RAM consumption of 1/2 MB, whereas a very basic OpenGL window with a simple background ranges from 6 to 9 MB.
This brings me to the actual question for this thread,
I thought that all rendering of the screen was done using either OpenGL or DirectX (under the covers).
Could someone please explain or link me some sort of article that could give me some insight of how these things actually work?
Thanks for reading!
These multiplatform toolkits usually support quite a lot of backends which do the drawing. Even though some of the toolkits support OpenGL as their backend, the default is usually the "native" backend.
Take a look, e.g., at Qt. On Windows it uses GDI for drawing in its native backend. On Linux it uses XRender, I think. Same on Symbian and Mac. Qt also has its own software rasterizer. And of course there is an OpenGL backend.
So why can an application using one of these GUI toolkits take less memory than a simple OpenGL application? If the toolkit uses the "native" backend, everything is already loaded in memory, because it is very likely that all visible GUIs use the same drawing API. The native APIs can also use only one buffer representing the whole screen, in which all applications draw.
However, when using OpenGL you have your own buffer which represents the application window. Not to mention that an OpenGL application usually has several framebuffers, like the z-buffer, stencil buffer and back buffer, which are not essential for 2D drawing but still take some space (even though it's probably space in graphics card memory). Finally, when using OpenGL, it is possible that the necessary libraries are not yet loaded.
Your question is exceedingly vague, but it seems like you're asking about why your GL app takes up more memory than a basic GUI window.
It's because it's an OpenGL application. This means it has to store all of the machinery needed to make OpenGL work. It means it needs a hefty-sized framebuffer: back buffer, z-buffer, etc. It needs a lot of boilerplate to function.
Really, I wouldn't worry about it. It's something every application does.

Capture window content to texture

Let me first specify my development essentials. I am writing a Windows DLL. The programming language I focus on is C/C++. Asm blocks are possible as well when required for my task. Maybe even a driver, but I do not have any experience with those at all.
The DLL is being injected into a host process. That's always a DirectX environment: either DX9, DX10 or DX11, and it may run in fullscreen or windowed mode.
The method should support Windows XP up to Windows 7 and is being compiled for x86 only.
The goal is to come up with a function taking a screenshot of a given process window. The screenshot is never taken from the host process itself. It's always another process! The window may contain DirectX or GDI32 content. Maybe other kinds of content are possible that I don't think of at the moment (Windows Forms comes to my mind; I am not sure how that is rendered internally). The windows may be minimized.
That screenshot needs to be accessible/convertible to a DirectX texture such as Texture2D, depending on the DirectX environment I am working in. Saving the screenshot as a PNG/BMP is enough though, as I do know how to create such a texture from memory.
I've already tried the old-style BitBlt way, but that didn't work on minimized applications. The minimized applications are drawn when I send WM_PAINT messages to the target window. That isn't a solution for me, as I also need to keep up with DirectX applications, which don't react to such messages.
Maybe I need to hook each single DirectX window to accomplish my task and access the backbuffer directly; I do hope for some better methods anyway.
Because I take a lot of screenshots from multiple windows, I would like to implement a fast method which isn't such a CPU hog. Copying from video RAM may be a bad way to go with such performance needs.
I do hope for some ideas, maybe code samples, as I am not familiar with all the possibilities I could go for. I've looked at the Windows thumbnail API, but that didn't support XP from what I could read.
Thanks in advance,
Frank

Cross platform hardware-native OpenGL library, possibly with multimedia?

I am looking for the ability to open OpenGL contexts and draw native OpenGL on Windows, macOS, Linux distros with X, Android and iOS.
I don't want to rely on the "native" device framework for the actual UI, and I don't need to use native components; all I want is an OpenGL context to draw in natively. Many of the cross-platform SDKs like Marmalade and MoSync focus on making use of the native UI components and stuff like that. All I need is an OpenGL context to draw as I intend to; absolutely no native UI functionality is required. However, access to native hardware features like the microphone, camera and other sensors is desired if possible, as well as access to audio/video/network.
I don't want to use Qt; I want to do something that is closer to the hardware and works on a low level. The general idea is to make a lightweight cross-platform hardware-accelerated GUI, written at a level low enough to be truly hardware native, without relying on any native software framework. I know that for Android I may have to use a Java wrapper to launch the native code, but the idea is to keep this wrapper minimal, with very little modification needed to deploy the low-level and thus hopefully TRULY cross-platform code, which depends only on OpenGL hardware and an OpenGL context to work with.
So I need a bare minimum solution to avoid using non-cross platform features as much as possible.
At the moment the only library that comes to my mind is SDL, but I am not sure it supports Android and iOS properly, so besides library recommendations, more information on how SDL handles Android and iOS devices and their hardware is welcome too.
How about:
GLFW
SDL
GLUT (FreeGLUT or OpenGLUT)
Blender's GHOST framework?
Essentially they open a window, create an OpenGL context on it, deliver you the input events and leave the rest up to you.
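For example, a minimal GLFW sketch; the drawing itself is left entirely to you:

    #include <GLFW/glfw3.h>

    int main()
    {
        if (!glfwInit()) return 1;

        GLFWwindow* window = glfwCreateWindow(800, 600, "My GUI", nullptr, nullptr);
        if (!window) { glfwTerminate(); return 1; }

        glfwMakeContextCurrent(window);

        while (!glfwWindowShouldClose(window))
        {
            glClear(GL_COLOR_BUFFER_BIT);
            // ... draw your own GUI with plain OpenGL calls here ...

            glfwSwapBuffers(window);
            glfwPollEvents();
        }

        glfwTerminate();
        return 0;
    }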
What you want is nothing more than a platform-specific OpenGL context setup, which is quite simple and well documented: the NeHe OpenGL tutorial provides code for many environments that does just this (the explanation is Windows-specific; scroll down for the code on different OSes).
Once you have the OpenGL context, nothing prevents you from creating a full GUI with all OpenGL elements.
If you want, you could use Qt to only set up an OpenGL context (i.e. don't use any QWidgets or anything, other than the window showing your OpenGL scene). It takes care of the whole setup process, but for only that, Qt becomes a huge dependency, as it only really replaces at most 100 lines of code per platform.
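If you did go that route, a minimal sketch using QOpenGLWindow (Qt 5.4+) might look like this, with all the actual drawing staying in raw OpenGL:

    #include <QGuiApplication>
    #include <QOpenGLWindow>
    #include <QOpenGLFunctions>

    class GLWindow : public QOpenGLWindow, protected QOpenGLFunctions
    {
    protected:
        void initializeGL() override { initializeOpenGLFunctions(); }
        void paintGL() override
        {
            glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            // ... your own OpenGL GUI drawing here ...
        }
    };

    int main(int argc, char** argv)
    {
        QGuiApplication app(argc, argv);
        GLWindow window;
        window.resize(800, 600);
        window.show();
        return app.exec();
    }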
With regards to SDL+Android, have you checked the README?
And for iOS check the same file here.