I've been thinking about a project for quite a while that would require extracting the color and depth buffers from OpenGL applications, in particular games. It has absolutely nothing to do with modding, in the sense of manipulating the game itself, nor is it intended for "cheating" purposes; it is purely for data gathering.
So now I'm trying to figure out possible ways to accomplish it. Of course, being able to do it with Direct3D under Windows would open up even more applications, but as I'm pretty familiar with OpenGL under Linux, I would start there.
As there are many modding/cheating applications that manipulate the color/depth buffer of video games in various ways (e.g. wallhacks in first-person shooters), this definitely has to be possible somehow.
Now the question is: what would be the best way to accomplish this? Reading out GPU memory directly would most probably not work; according to this thread, memory mapping in OpenGL is completely dependent on the vendor implementation and there is no trivial way to get the VRAM addresses of the corresponding data.
Alternative approaches I can think of might now be categorized as extravenous or intravenous ones:
Extravenous: extract the OpenGL context from a process and access buffers, shaders, etc. from a third application, without manipulating the target application's binary directly.
Intravenous: manipulate the target application's binary/code in such a way that it writes the corresponding buffers/data either to a specific place in memory or saves them somewhere directly.
The latter approach should definitely work, but it involves more effort and would need to be done per application. Hence the former would definitely be preferred, but is the described way applicable at all? Is it even possible to access OpenGL resources from a different process when you have that process's OpenGL context handle? Does anyone have experience with this?
Update:
After some research I found out that what I was looking for is pretty common and is called "hooking" or "interception" in general.
For OpenGL and Direct3D there are many different libraries and programs to do this:
glintercept: OpenGL # Windows
Indicium-Supra: DirectX
apitrace: OpenGL + Direct3D # Windows, macOS, Linux
D3D9Interceptor: Direct3D
Nvidia Nsight: Direct3D + OpenGL + Vulkan # Windows, Linux
and many others.
The de facto standard way of doing this would be to hook yourself into the process and redirect the graphics API calls the application (game) makes to your own code. Your hook can then record whatever data it needs and perform whatever action it wants before passing on the call to the actual API implementation. There are many ways of doing this with different pros and cons, ranging from building a fake library with the same interface and tricking the game into loading that one instead of the actual graphics library (DLL injection) to modifying the machine code in the loaded process image to make function calls jump into your code. Which approaches are applicable and best for your case will highly depend on many factors such as target platform, target applications, the API you want to hook into, and so on.
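To make the fake-library variant concrete on Linux: below is a minimal sketch of an LD_PRELOAD shim that intercepts glXSwapBuffers and reads back the color and depth buffers just before each frame is presented. The file and variable names are my own. Two caveats: many modern games render into FBOs, so the default framebuffer's depth buffer may be empty, and games that resolve GL functions via glXGetProcAddress need additional handling.

    // hook.cpp — illustrative sketch, not a battle-tested implementation.
    // Build: g++ -shared -fPIC hook.cpp -o libhook.so -ldl -lGL
    // Run:   LD_PRELOAD=./libhook.so ./the_game
    #include <GL/gl.h>
    #include <GL/glx.h>
    #include <dlfcn.h>
    #include <vector>

    extern "C" void glXSwapBuffers(Display* dpy, GLXDrawable drawable) {
        // Look up the real implementation once, on first call.
        using SwapFn = void (*)(Display*, GLXDrawable);
        static SwapFn real_swap = (SwapFn)dlsym(RTLD_NEXT, "glXSwapBuffers");

        // The drawable's size tells us how much to read back.
        unsigned int w = 0, h = 0;
        glXQueryDrawable(dpy, drawable, GLX_WIDTH, &w);
        glXQueryDrawable(dpy, drawable, GLX_HEIGHT, &h);

        std::vector<unsigned char> color(w * h * 4);
        std::vector<float> depth(w * h);
        glReadBuffer(GL_BACK);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, color.data());
        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

        // ... record color/depth to disk or shared memory here ...

        real_swap(dpy, drawable);  // hand the call back to the real driver
    }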
The main issue with any such approach, however, is that that's exactly how many cheats work. Thus, many video games will come with built-in protection to prevent precisely this kind of stuff from working. With many online games, you might even risk having your account suspended for suspected cheating by trying to do stuff like this. But that's just something to be aware of. In my experience, it will still work with many games, particularly single-player games.
"Extracting the OpenGL context from another process" will not work, at least not on any proper operating system. The whole point of having the process abstraction in the first place is to isolate applications from each other…
Since I don't have enough reputation to ask this in a comment, let me ask you here what exactly your goal is. Do you want to get this information once for a single frame or do you want to record this for many frames over a period of time? Do you need an automated solution in form of a custom application? If neither of those, you might be able to just use a graphics debugging tool like Nsight Graphics to capture and export the frame you want…
Related
OpenGL is great for creating UIs (especially in games) and is highly portable. Is it unusual for an ordinary (not graphically intensive) application to use OpenGL for its UI? If so, why? Is it about performance or ease of use?
For example, an Apple developer can use the ready-to-use buttons, sliders, etc. provided by Apple, or create the UI using OpenGL. The second method makes the code more flexible and portable. Why don't people do this?
Does using OpenGL make sense if portability is our goal?
There's a lot more to UI than graphics.
As one example: fonts. Rendering Chinese, Japanese, Korean, Arabic, or Thai in the right directions with the right strokes etc. is a TON of work. There are whole teams dedicated to that topic at Microsoft, Google, Apple, Adobe, and other companies. If you go straight to GL you're going to have to solve that problem yourself.
Another example: native controls. iOS users expect certain controls to work a certain way; Android users expect something different. For a game that's usually not a problem, since games usually have a very unique UI. Apps, on the other hand, are generally expected to stick to the conventions of their target platform, and users get upset when the controls don't match their native platform. Using GL for your UI means you won't get the native controls.
Similarly, text editing is a very platform-specific feature. Is it drag to select, right-click to select, hold to select? Which keys do what, and how do they work? Is it Ctrl-V or ⌘-V? All of that is platform dependent as well. You can use the native text editing controls and have the problem solved, or you can use GL and have to reproduce not only all the code to edit text, but also make it work the right way on each platform in each configuration. Does your GL text editor handle Japanese keyboards? German keyboards? Does it handle Input Method Editors for CJK and others?
So, it's more a matter of the right tool for the right job. There are whole UI platforms written on top of GL; before Metal, OS X was probably one of them (or still is?). But if you skip that and build your UI directly on GL, you'll end up having to implement all those in-between pieces yourself.
Certainly GL might be the right way to go for certain non-game apps. Paper comes to mind as an app that could be 98% GL and therefore gain portability. On the other hand, Evernote is probably at the far other end: it needs to handle every language, different fonts, input for users with disabilities, etc. All of that is built into the OS, not GL.
Yes, what you suggest is possible. Just have a look at Blender. Blender implements its own UI using OpenGL, for the exact portability reasons you gave.
However there's a lot more to user interfaces than just getting things drawn to the screen. Event management, input handling, interoperability with other applications. All that depends on functions that are not covered by OpenGL.
But there are cross-platform application framework libraries, like Qt, and the drawing code makes up only a small portion of what those frameworks do.
One problem you run into when using OpenGL for drawing the GUI, though, is that there's huge variation in the OpenGL profiles supported by the systems out there. It can vary from a mere OpenGL-1.1 software fallback on an old Windows XP machine, over OpenGL-1.4 on a Windows Vista machine with only the default drivers installed by Windows setup, up to OpenGL-4.5 on the same machine once the user installs proper drivers. And the way you use OpenGL-4.5 is largely incompatible with OpenGL-1.4.
For a UI toolkit written with an OpenGL backend, this means you must implement at least three codepaths: an OpenGL-1.1 variant that uses the fixed-function pipeline and client-side vertex arrays, an OpenGL-3 compatibility profile path, and an OpenGL-4 core profile path.
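To make the divergence concrete, here is a rough sketch of drawing the same screen-aligned quad in the two extreme codepaths. The core-profile version assumes a shader program and VAO created elsewhere, and both assume an extension loader such as GLEW or glad provides the entry points:

    #include <GL/glew.h>  // assumed loader; exposes legacy and core entry points

    // OpenGL-1.1 path: fixed-function immediate mode, no shaders required.
    void draw_quad_legacy() {
        glBegin(GL_QUADS);
        glVertex2f(-1.f, -1.f);
        glVertex2f( 1.f, -1.f);
        glVertex2f( 1.f,  1.f);
        glVertex2f(-1.f,  1.f);
        glEnd();
    }

    // OpenGL-3+ core path: glBegin/glEnd no longer exist; the quad lives
    // in a VBO bound to a VAO, and a shader program does all the work.
    void draw_quad_core(GLuint program, GLuint vao) {
        glUseProgram(program);
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }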
This is much more work than just using the OS-specific methods, which you have to use anyway to create the window and receive user input.
The title pretty much says it. I was thinking about making a simple video editor, and I was unsure about the "logistics" of various effects and filters and such. Let's say I want to make it possible for an external program to apply some effect to the image. Does it necessarily have to be an executable program, or can it simply be a set of OpenGL instructions that the video editor can parse and essentially "pass along" to OpenGL? Technically that would be a program too, but it seems more elegant and standardized than making a full-fledged secondary program just to apply an effect. Maybe there's a better way?
Edit: Thanks for the answers guys. Here's a follow up though: How do other video editors implement this? The reason I ask is because the answers seem to be rather negative on the above point, so I was wondering how it is done by professional applications.
Does it necessarily have to be an executable program, or can it simply be a set of OpenGL instructions that the video editor can parse and essentially "pass along" to OpenGL?
There are several ways to approach the problem ("make video editor with ability to create custom effects").
Make your editor support plugins and provide an API for making new video-effect plugins. Macromedia Director worked this way. A new effect has to be implemented as a plugin library (.dll/.so, depending on your platform); a sketch of such an interface follows below.
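A minimal sketch of what such a plugin ABI could look like; all names here are hypothetical, and a real API would also need versioning and pixel-format negotiation:

    // effect_plugin.h — hypothetical C-ABI contract between editor and plugin.
    // The editor dlopen()s / LoadLibrary()s the plugin and looks up the symbols.
    #include <cstdint>

    extern "C" {

    struct Frame {
        uint8_t* rgba;      // interleaved pixel data, 4 bytes per pixel
        int width, height;
    };

    // Every plugin exports these; the editor calls effect_apply once per frame.
    const char* effect_name();
    void effect_apply(Frame* frame, double time_seconds);

    }  // extern "C"

On the host side, loading boils down to dlopen("myeffect.so", RTLD_NOW) followed by dlsym for each symbol (LoadLibrary/GetProcAddress on Windows).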
Embed a scripting language with OpenGL bindings into your application for making video effects, and provide basic functions for interacting with the internal state of your application. This allows faster development of effects, but performance will suffer greatly for certain operations. Blender (the 3D editor) works this way, and Maya provides a similar framework (I think). Basically, this is #1 implemented as a scripting language.
Make your editor load GLSL shaders. This will let you create some effects fairly quickly, but beyond that it is not a great idea: while GLSL has a decent amount of "power" (functions/maths), GLSL alone won't let you make anything you want, because it can't interact with your application at all (pure GLSL can't create textures, framebuffers, and so on, which limits your ability to make something more interesting). Most likely you'll only be able to make some kind of filter, nothing more. #1 and #2 give that "power" to the end user.
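As a sketch of this option: a grayscale filter written as a GLSL fragment shader that the editor loads as text and compiles at runtime, drawing each frame as a textured quad through it. The compile helper assumes GL-3 entry points from a loader such as GLEW or glad:

    #include <GL/glew.h>  // assumed loader

    // A user-supplied filter: just a fragment shader string the editor loads.
    const char* kGrayscaleFilter = R"(
        #version 330 core
        uniform sampler2D frame;   // the current video frame
        in vec2 uv;
        out vec4 color;
        void main() {
            vec3 c = texture(frame, uv).rgb;
            float g = dot(c, vec3(0.299, 0.587, 0.114));  // Rec. 601 luma
            color = vec4(vec3(g), 1.0);
        }
    )";

    // Compiles the filter; the editor links it against its own vertex shader.
    GLuint compile_filter(const char* src) {
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, &src, nullptr);
        glCompileShader(sh);
        GLint ok = GL_FALSE;
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
        return ok ? sh : 0;  // 0 signals a compile error to the caller
    }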
Implement a node-based effect editor, i.e. there are several types of nodes the user can drag around that represent certain operations; they have inputs/outputs and the user can connect them. Blender and the UDK (Unreal Development Kit) have such a feature, and I think .kkrieger (site is down, google it) used a similar technique to make a 96 kB first-person shooter. This can be quite powerful, but programmers are very likely to hate dragging graphical entities around with the mouse, unless you also provide a way to build such a node graph via a scripting language (AviSynth had something similar, but not quite like that). This can be as powerful as #1 or #2, but will cost you extra development time; a sketch of the core data structure follows below.
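The evaluation core of such a node graph can be surprisingly small. Here is a hypothetical sketch (the types are my own; a real editor would cache results so shared nodes aren't evaluated twice):

    #include <vector>
    #include <functional>

    struct Image { /* pixel data, omitted */ };

    // A node consumes the outputs of its input nodes and produces one image.
    struct Node {
        std::vector<Node*> inputs;
        std::function<Image(const std::vector<Image>&)> op;  // blur, mix, ...

        Image evaluate() const {
            std::vector<Image> args;
            for (const Node* in : inputs)
                args.push_back(in->evaluate());  // naive: no caching of shared nodes
            return op(args);
        }
    };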
Aside from that, there is no "language-independent" way to make an effect. Either you use an existing language to make the effect plugin interact with your application, or you make your own "language" to describe effects. The only truly language-independent way to deal with this is to hire a programmer and tell them what kind of effect you want, but then again, that requires natural language.
Let's say I want to make it possible for an external program to apply some effect to the image. Does it necessarily have to be an executable program, or can it simply be a set of OpenGL instructions that the video editor can parse and essentially "pass along" to OpenGL?
Either approach works. However, in the OpenGL standard itself there's no such thing as "OpenGL instructions" or "opcodes". But at least on X11-based systems with indirect GLX this is possible, because GLX actually defines opcodes, and it is possible for multiple X clients to operate on the same context if it is indirect. Unfortunately you won't have that option most of the time, as you probably want a direct context, either because you want OpenGL-3 (for which not all operations have opcodes defined, which makes indirect contexts impossible) or because you're not using GLX.
So the next option is to provide the other process with some kind of command/interpreter prompt for OpenGL. If you're lazy, I suggest you just embed a Python interpreter in your program, together with the Python OpenGL bindings. Those operate on whatever context is currently active, allowing the other program to send some Python script to do its stuff.
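For illustration, the embedding side could look roughly like this, using CPython's documented embedding API. It assumes Python and PyOpenGL are installed, and real code would initialize the interpreter once rather than per script:

    #include <Python.h>  // CPython embedding API

    // Runs a user-supplied effect script against the currently bound GL context.
    void run_effect_script(const char* script_source) {
        Py_Initialize();  // in real code: initialize once at startup
        // PyOpenGL issues calls against whatever context is current, so the
        // editor must make its context current before running the script.
        PyRun_SimpleString("from OpenGL.GL import *");
        PyRun_SimpleString(script_source);
        Py_Finalize();
    }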
And last but not least you could provide OpenGL through some RPC interface.
Or you provide some plugin system, where you load some DLL, which does the deed.
Sure, but that would basically be the equivalent of creating your own language. You could accept OpenGL "instructions" from the user via some text interface (or a GUI, if you want to put something together), parse those "instructions", and have the underlying implementation execute them in whatever language your application is written in.
The short answer is no. What you are looking for is an HCI guy's dream: a DSL that describes presentation regardless of the underlying technology.
Probably not quite what you wanted, but you might want to look at GLSL. GLSL is a C-like language that is compiled by the graphics card driver into native "graphics assembly language".
I'm looking to create a "driver", I guess, for a custom file system on a physical disk for Windows. I don't exactly know the best way to explain it: the device already has proper drivers and everything like that for Windows to communicate with it, but what I want is for the user to be able to plug the device into their PC, have it show up in My Computer, and get full support for browsing the device.
I realize it's probably a little scary thinking about someone who doesn't know the basics of something like this even asking the question, but I already have classes and everything constructed for reading the device within my own app... I just want everything to be more centralized and to require less work from the end user. Does anyone have a good guide for creating a project like this?
The closest thing I know of to what I understand from your description is an installable file system, like the Ext2 Installable File System that allows Windows computers to work with Linux-originating ext2 (and to a certain degree ext3) filesystems.
Maybe that can serve as a starting point for your investigations.
As an alternative approach, there's the shell extension, which is a lot less complicated than an IFS. The now-defunct GMail shell extension used that approach, and even though it has become nonfunctional due to changes in GMail, it can still serve as inspiration.
Your options are:
Create a kernel-mode file system driver: 9-12 months of work for an experienced developer.
Use a framework and do everything in user mode: a couple of weeks of work to get a prototype working. The only drawback of this approach is that it's slower than a kernel-mode driver. You can play with Dokan mentioned above, or you can use our Callback File System for commercial-grade development.
I think you need to look through the Windows Driver Kit documentation (and related subjects) to figure out exactly what you're looking to create.
If you intend to rely on the drivers that already exist, i.e. you don't need to actually execute your code in kernel land to communicate with the device, I recommend you take a look at FUSE for Windows: Dokan.
If you indeed need to run in kernel space and communicate directly with the hardware, you probably want to download the Windows DDK (Driver Development Kit). Keep in mind that drivers for communicating with a block device and filesystem drivers are separate things, and it sounds like you're talking about the filesystem itself. Also note that anything you run in kernel space will not have access to the C++ runtime, which means you can only use a subset of C++ for kernel drivers.
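To give a feel for the user-mode route: below is a minimal read-only filesystem against the libfuse 2.x API (Dokan exposes a very similar callback model on Windows). It serves one hard-coded file; a real implementation would route these callbacks to your existing device-reading classes:

    // Build (Linux): g++ hellofs.cpp -o hellofs $(pkg-config fuse --cflags --libs)
    // Run:           ./hellofs /mnt/mydevice
    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <sys/stat.h>
    #include <algorithm>
    #include <cerrno>
    #include <cstring>

    static const char kData[] = "hello\n";

    static int fs_getattr(const char* path, struct stat* st) {
        std::memset(st, 0, sizeof(*st));
        if (std::strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;        // the mount root
            return 0;
        }
        if (std::strcmp(path, "/hello.txt") == 0) {
            st->st_mode = S_IFREG | 0444;        // a read-only file
            st->st_size = sizeof(kData) - 1;
            return 0;
        }
        return -ENOENT;
    }

    static int fs_readdir(const char* path, void* buf, fuse_fill_dir_t filler,
                          off_t, struct fuse_file_info*) {
        if (std::strcmp(path, "/") != 0) return -ENOENT;
        filler(buf, ".", nullptr, 0);
        filler(buf, "..", nullptr, 0);
        filler(buf, "hello.txt", nullptr, 0);
        return 0;
    }

    static int fs_read(const char* path, char* buf, size_t size,
                       off_t offset, struct fuse_file_info*) {
        if (std::strcmp(path, "/hello.txt") != 0) return -ENOENT;
        if (offset >= (off_t)(sizeof(kData) - 1)) return 0;
        size_t n = std::min(size, sizeof(kData) - 1 - (size_t)offset);
        std::memcpy(buf, kData + offset, n);
        return (int)n;
    }

    int main(int argc, char* argv[]) {
        fuse_operations ops = {};  // zero all callbacks we don't implement
        ops.getattr = fs_getattr;
        ops.readdir = fs_readdir;
        ops.read    = fs_read;
        return fuse_main(argc, argv, &ops, nullptr);
    }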
We have created an OpenGL application in C++ which visualizes some physical simulations. The basic application is contained within a DLL which is used by a simple GUI. It currently runs on a desktop PC, but we are considering turning it into a web service.
Since the simulations require dedicated hardware, the idea is that a user, through his/her browser, can interact with our application as a service, and this service then renders the result to an image (JPEG or anything appropriate) that can be displayed/updated in the browser.
My question:
How can I "easily" turn a C++ application as described above into a web service that runs on some server, so that I can access it over the web? What kind of technologies/APIs should I look at? And are there any real-life examples that tackle a similar problem?
This is possible, but one major difficulty is using OpenGL from a web service. You'll need to port your rendering to run 100% offscreen and work without a windowing context. That would be my first step, and it's not always a trivial one (one way to get such a context is sketched below).
Also, it is very difficult to maintain and manage OpenGL contexts from a web service correctly, and the overhead involved can be quite painful. Depending on the number and types of renderings, you may run into some issues there.
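As a hedged sketch of the windowless-context step: on a modern Linux server this can be done with EGL pbuffer surfaces (an option that postdates some of these answers; error checking omitted for brevity):

    #include <EGL/egl.h>

    // Creates a headless desktop-GL context backed by an offscreen pbuffer.
    EGLContext make_headless_context(int width, int height) {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, nullptr, nullptr);

        const EGLint cfg_attribs[] = {
            EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint num_cfgs = 0;
        eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &num_cfgs);

        const EGLint pbuf_attribs[] = {
            EGL_WIDTH, width, EGL_HEIGHT, height, EGL_NONE
        };
        EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);

        eglBindAPI(EGL_OPENGL_API);  // ask for desktop GL, not GLES
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
        eglMakeCurrent(dpy, surf, surf, ctx);  // GL calls now render offscreen
        return ctx;
    }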
If you need it to scale up well, CGI would probably be kind of slow and hacky.
There are some C++ web frameworks out there, see this question.
As far as the OpenGL side goes, you'll probably need to use the framebuffer object extension, as Jay said.
You could then render your image to a texture and use glGetTexImage() to grab the pixel data. From there, just save it into whatever image format you want with the accompanying library.
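A rough sketch of that render-to-texture-and-read-back path (assuming a current GL context, and a loader such as GLEW or glad for the FBO entry points):

    #include <GL/glew.h>  // assumed loader
    #include <vector>

    // Renders one frame offscreen and returns the raw RGB pixels.
    std::vector<unsigned char> render_frame_to_pixels(int w, int h) {
        GLuint fbo, tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, w, h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, nullptr);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        // ... draw the simulation here ...

        std::vector<unsigned char> pixels(w * h * 3);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
        // Feed `pixels` to a JPEG encoder (e.g. libjpeg) before returning
        // the image to the browser.
        return pixels;
    }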
I had a similar project/question; although it wasn't a web service, it required OpenGL rendering in a Windows service. I had tons of problems getting it to work on Vista, although it eventually did work on XP with regular OpenGL.
I finally tried using Mesa, which I built to work as a private DLL for my service. It was a great decision because I could now actually step into the OpenGL calls and see where things were going wrong. It ran fine in software mode under the service, and while it wasn't hardware accelerated, it worked very well.
If your user-interaction needs are simple, I'd just look at CGI. It should be pretty easy to understand.
As far as getting the output of the OpenGL program goes, I'd take a look at the framebuffer extension. That should make it easy for you to render into memory, which can then be fed into a JPEG compressor.
I guess you already have access to some webserver, e.g. Apache or similar. In that case you could try out CppServ.
You just need to configure your webserver so it can communicate with CppServ. I'd then propose that you develop a servlet (check the documentation on the site) which in turn communicates with your already existing DLL. Since the DLL knows everything about OpenGL, it should be no problem to create JPEG images etc.
How about using something like Flex to create a web-enabled 'control interface', with a server backend that streams the OpenGL rendering as video? Basically, you redirect keyboard/mouse input via the Flex app and display the 'realtime' 3D activity using a standard movie component.
The devil's in the details, of course....
You could try using Google's O3D API and not build any sort of service; that would be much simpler.
This answer might sound very basic and elementary, but have you tried this approach:
Send the vector data from the server to the client, or vice versa (just coordinates and so on).
Render it on the client side.
This means that the server only does the calculations, and the numbers are passed back and forth.
I agree that vectorizing every object/model and texture is not trivial, but it is very fast, since instead of heavy graphical images only vector data is sent across.
There are tons of GUI libraries for C/C++, but very few of them build on the fact that OpenGL is a rather multiplatform graphics library. Is there any big disadvantage to using OpenGL for building my own minimal GUI in a portable application?
Blender is doing that, and it seems that it works well for it.
EDIT: The point of my question is not about using an external library versus making my own. My main concern is the use of libraries that use OpenGL as a backend: Agar, CEGUI, or Blender's GUI, for instance.
Thanks.
Here's an oddball one that bit a large physics experiment I worked on: because an OpenGL GUI bypasses some of the usual graphics abstraction layers, it may defeat remote viewing applications.
In the particular instance I'm thinking of we wanted to allow remote shift operations over VNC. Everything worked fine except for the one program (which we only needed about once per hour, but we really needed) that used an OpenGL interface. We had to delay until a remote version of the OpenGL interface could be prepared.
You're losing the native platform's accessibility capabilities. For example, on Windows most controls provide information to screen readers and other tools that support users who rely on accessibility aids.
Basically unless you have a real reason to do this, you shouldn't.
Re-inventing the wheel: yeah, you'd be doing it. But I note the OP used the word "minimal" in the problem statement, so assuming it really doesn't need to scale up all that much, it may be a small enough wheel as to not matter. The product I currently work on supports OpenGL on three platforms (Windows, Mac, Linux) and we built all our own widgets (text boxes, buttons, dialogs). It's a lot of work, but now that we've done it we own a huge chunk of our stack and don't have to debug into third-party frameworks when things don't work as expected. It's nice having complete control of the experience. There's always something you want to do that a framework doesn't support. Like everything in our business, it is a trade-off, and you have to weigh your needs against your need to finish on time.
Portability: yes, but you will still have to write platform-specific code to bootstrap everything. This will be difficult if you've not done it before, as it requires you to understand all the target platforms.
Windows drivers: we've found that graphics card manufacturers have much better support for DirectX on Windows than for OpenGL, since that's what is required to get MSFT certification. Low- to mid-range cards often have bugs, missing functionality, or outright crashes in their OpenGL support.
With Qt 4.5 you can choose whether to use OpenGL as the window renderer.
More info:
https://www.qt.io/blog/2008/10/22/so-long-and-thanks-for-the-blit
Read the comments there to learn about the problems with this approach.
The obvious one is that you're basically building the GUI elements yourself, instead of having a nice designer like the ones wxWidgets or Qt provide.
You can't just use OpenGL by itself; you need platform-specific code to set up a window for OpenGL.
From the freely available OpenGL Red Book:
"OpenGL is designed as a streamlined, hardware-independent interface to be implemented on many different hardware platforms. To achieve these qualities, no commands for performing windowing tasks or obtaining user input are included in OpenGL; instead, you must work through whatever windowing system controls the particular hardware you're using."
There are multiplatform solutions for this, like GLUT, Qt, wxWidgets, ...
And if you have to use one of them anyway, why not use its built-in GUI elements? They also give you the opportunity to build your own, while letting the framework handle mouse/keyboard events and such.
If you want a GUI with windows/buttons etc., don't do it yourself. There are a lot of free solutions for this: wxWidgets, Qt, or GTK. All have OpenGL support if you want a 3D window.