This question already has answers here:
Take screenshot of DirectX full-screen application
(7 answers)
Closed 9 years ago.
I am trying to write a tool to capture screenshots. To see how existing tools behave, I tried a few of them, such as nircmd.exe, capturing screenshots automatically at intervals. The problem is that they don't capture the WoW window while it is active.
WoW runs in fullscreen mode, but the capture only shows what's in the top-left corner; the rest of the image shows something else entirely. I blacked out the unnecessary parts of the screen. For information: it showed Skype, which was not minimized, even though WoW was the active window. Any idea what the problem could be?
I would also be happy if someone pointed me to a tool that solves the problem. This seems like a complicated problem to solve with my own code, considering that all the other tools fail as well.
Capturing an exclusive-fullscreen application is not trivial: fullscreen is its own mode at the device level, and the OS does not provide APIs for capturing it. You have to "break into" the target process and hook the functions of the graphics API the application uses.
Try FRAPS if you are not interested in programming it yourself.
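To make the "hook the graphics API" part concrete, here is a rough pseudocode sketch of the usual approach for an exclusive-fullscreen Direct3D game. The names are illustrative, not a real API; the exact function to hook depends on whether the game uses Direct3D 9, DXGI, or OpenGL:

```
inject a helper DLL into the target process
find the device's Present function (e.g. through the device vtable)
overwrite it with a hook:
    HookedPresent(device):
        copy the back buffer into a CPU-readable surface
        hand the pixels to the capture tool (shared memory, file, ...)
        call the original Present(device)
```

This is, roughly, the approach capture tools like FRAPS are generally understood to use.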
Essentially I'm trying to learn more about the Win32 API: how certain classes/elements are created and destroyed, what items make them up, etc. Dissecting windows, if you will, for a project of mine.
At the moment I'm very curious about what the popups/tooltips/hover effects ubiquitous to all Windows applications are made of. My main goal is to grab the text from any tooltip/hover window/WS_POPUP.
If someone knows, that is great, but I'd also like to have the tools to research it myself.
I'm not even sure what to Google, to be honest, to get on the right path. I've tried some C++ code to print class names and fetch the text from what I think might be a message box, but no dice so far.
The MiniSpy tool on CodeProject comes in handy in situations like this because it uses the corner of the spy window as the location, not the mouse.
I would like to recognize objects in Windows applications, mainly computer games. I would like to accomplish this by opening the window in OpenCV, applying various effects to the running game, and recognizing objects such as UI elements, messages, and even the characters on the screen.
Since OpenCV only allows video and webcam as input, is there a way to open a running application as a source for OpenCV?
There may be testing applications used in game development that use similar methods, but I couldn't find any.
I also don't know the right terms to use when discussing the recognition of virtual objects in a computer program or game, but that is exactly what I would like to do.
I tried to look up forums, articles or anything written about this, but found nothing. Any help would be appreciated.
I used OpenCV for a year and I am not sure if you can pass the running application to it.
As an alternative, you might want a function that gives you the current screenshot of the desktop, so you can grab it and pass it to OpenCV.
That way you can take screenshots periodically and run your recognition on them.
If you are working under Windows, you might find this discussion useful.
Hope this will somehow help you.
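A minimal sketch of that screenshot-then-recognize loop, assuming you already have some desktop-capture function (stubbed here with a synthetic image so the example is self-contained). The matching below is plain sum-of-squared-differences in NumPy rather than OpenCV's `cv2.matchTemplate`, but the idea is the same:

```python
import numpy as np

def grab_screen():
    # Stand-in for a real desktop-capture function (e.g. GDI, mss, or
    # PIL.ImageGrab); returns one grayscale frame as a 2D numpy array.
    frame = np.zeros((60, 80), dtype=np.float64)
    frame[20:28, 30:38] = 1.0  # a bright 8x8 "UI element" at (20, 30)
    return frame

def find_template(frame, template):
    """Return (row, col) of the best match by sum of squared differences."""
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            ssd = np.sum((frame[r:r+th, c:c+tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

template = np.ones((8, 8))      # the object we want to recognize
frame = grab_screen()           # one iteration of the periodic capture
print(find_template(frame, template))  # → (20, 30)
```

In a real tool you would replace `grab_screen` with an actual capture call and run this in a timed loop; OpenCV's built-in `matchTemplate` does the inner search far faster than this illustrative double loop.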
A few months ago I experimented with using the desktop as a source in OpenCV, and I succeeded, although there was a trailing artifact inside the windows, possibly depending on processor speed. It requires writing your own capture function to use the desktop as a source, since that is not part of the OpenCV libraries. From there I continued customizing things and got stuck on a bug related to memory, as it uses a lot of memory. I posted about it on Stack Overflow. Hope this information helps.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
As some suggested, I'm trying to narrow the question down and move away from the web. That web example was not exactly what I wanted. I simply want to make a macro: for example, if a condition is true, open the date settings and change the date to December 2000; if false, open the time settings. How can I program this? Do I need to execute some commands in a command prompt window to do it? If so, what are they? Ultimately, I want to make a macro that does a lot of work in just one click.
As far as I know, everything we do on a computer, or in Windows, just executes a series of commands depending on mouse and keyboard input. So I want to know whether it is possible to see which commands are being executed in the background. I want to build a macro into my program: for example, a program that goes to Google, searches for a word, and downloads the first image that comes up in the results. If that is possible, how?
[I could have searched on Google, but I just couldn't guess what keywords to use, so I posted here. I am sorry if this has already been discussed.]
These click-related commands are really part of the GUI system: they figure out what type of click occurred, where on the screen it happened, and so on. Only after all that does some function start executing to do the actual work, and that function is specific to the program rather than to the GUI.
Programs that drive other programs do not go through the GUI and can therefore skip all of that. They instead use APIs to access directly the functionality that the chain of events triggered by a mouse click would eventually reach.
The web is not my area of focus, so I do not have a reliable suggestion for how you would accomplish this, but the takeaway is this: don't try to understand the underlying technology by tracing mouse clicks. The GUI is an abstraction over what is really going on "under the hood," and is rarely an effective path into the underlying mechanisms of a program.
Searching for "image search API" would be a good place to start.
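To illustrate the "macro as a series of commands" idea in a platform-neutral way, here is a minimal Python sketch: a macro is just an ordered list of commands, chosen by a condition and executed one after another. The command lists here are placeholders (on Windows they could be something like `["control", "timedate.cpl"]` to open the date/time settings):

```python
import subprocess
import sys

def run_macro(steps):
    """Run each command in order, raising if one fails, and collect output."""
    outputs = []
    for cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        outputs.append(result.stdout.strip())
    return outputs

# Two alternative macros; which one runs depends on a condition,
# like "open date settings if true, time settings if false".
date_macro = [[sys.executable, "-c", "print('opening date settings')"]]
time_macro = [[sys.executable, "-c", "print('opening time settings')"]]

condition = True
print(run_macro(date_macro if condition else time_macro))
# → ['opening date settings']
```

Real automation tools layer input simulation and window handling on top of this, but plain command execution like the above is often enough for "do a lot of work in one click."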
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
Background: I built Qt for iOS in the usual way (by following the instructions here). I ran qmake in the "cube" example project directory, opened the output .xcodeproj file, and then ran it on my iPad 3. Instead of the expected cube I just get a black screen. I can tell from inserting qDebug() statements that the touch events are working, but there is no display, even though the Qt OpenGL API calls seem to all succeed. In the console there is only this:
QIOSBackingStore needs to have the same size as its window
I thought there was a chance this example would work, since it's explicitly OpenGL ES 2.0-compatible, but I guess there is some problem with the implementation. OpenGL ES 2.0 is unwieldy, and I'm not familiar enough with it to quickly diagnose the problem, unfortunately.
I've learned that Qt's OpenGL support is very good, so I would like to use it for iOS development if possible, rather than reinventing the wheel. Does anyone know what could be wrong with the example project? Failing that, does anyone have a Qt OpenGL project that works correctly on their iPhone or iPad?
Does anyone have an OpenGL ES 2.0 Qt 5 example working under iOS?
I have just had a quick chat with the developers, and they said the cube example did work at some point in their quick tests. It may be that you are hitting a bug. I would suggest reporting it on the bug tracker against the QtPorts iOS component; Tor Arne will eventually look at it when he gets there.
That said, the "hellogl_es2" and "hellogl" examples have also worked for the developers, and they should work for you too. Again, if there is any bug, please report it; iOS is a fairly new platform for Qt to support, so smaller bugs may occur.
As for some background on the history of the iOS port: the main focus for the last minor release, Qt 5.1, has been Qt Quick 1. There has not been thorough development and testing of OpenGL support (including raw C++ code).
Lately, the development branch in git got V4 support (the new interpreter engine) in the declarative repository, which is a step toward Qt Quick 2 and, implicitly, proper OpenGL support. Until then, it is hard to say what works reliably.
The people behind the port have not had OpenGL on their radar that much, for understandable reasons. This includes, for instance, QGLWidget from the Qt 4 era.
The only official examples provided by the Qt iOS team are available at:
1) https://itunes.apple.com/us/app/subattack/id659283830
2) https://itunes.apple.com/us/app/qtquicksand/id666273528
That being said, 5.2 and later releases will bring improvements, so it is worth watching.
So it turns out that the OpenGL ES 2.0 example that actually works on iOS is called "hellogl_es2". The "cube" example I was trying has not yet been fixed to run on iOS; it is also OpenGL ES 2.0, but there is probably some problem with the implementation that I am not qualified to remedy at this time.
Anyway, if you want to see OpenGL ES 2.0 with QGLWidget running under iOS today, deploy the "hellogl" example. It is working fine on my iPad 3 at ~60 fps.
I'm interested to know whether gtkmm with ATK (or whatever) works with MSAA the way Qt does. We're looking right now at switching toolkits away from wxWidgets, and it turns out that our testing software relies on MSAA to do its thing (something I wish I'd known three years ago when we picked wx to begin with). Of all the GUI toolkits I prefer gtkmm, mainly due to its use of signals and slots, in a way much more expressive and generic than Qt's, and without the extra build step that requires the Qt VS plugin. The designer is much better too.
So I'd like to use gtkmm, but the only discussions and Google results I can find on the topic are 3+ years old. They lead me to believe the answer is no, it doesn't support it, and that if it does, the support is really shoddy. But a lot can change in three years.
So, does anyone who uses GTK or gtkmm on Win32 know whether it supports the Windows accessibility framework?
I can't say for sure, but I would lean toward very little to no support. I use the JAWS screen reader since I'm blind. It uses MSAA quite heavily, and GTK applications such as Pidgin are almost completely inaccessible. While I can read some of the text on screen, figuring out whether I'm in an edit field or whether a button is selected is impossible. If my screen reader can't deal with GTK applications, I assume your testing software will have major issues as well.
I don't know if this is still being tracked, but I will second this assessment as another Windows screen reader user. GnuCash was the app I tried, and it was pretty rough going; it worked like a dream in GNOME with Orca, though. Apparently it's like this: if you want Windows accessibility, use Qt or wxWidgets; if you want Linux, use GTK+. Qt is apparently going to be accessible on Linux too, though this is yet to happen (not until GNOME 3, I think). Pity you had to abandon wxWidgets. I personally like its widgets, since its sizers take much of the guesswork out of placing controls, which matters when you can't see and you want to build a GUI. It looks like about the only cross-platform accessibility solutions right now are XUL and SWT (Java, you know). The sad thing about wxWidgets is that this sort of thing was reported to them two years ago, but nothing seems to have been done about it:
http://trac.wxwidgets.org/ticket/9785
I would be delighted to know that I am wrong about this. I doubt it, though.