Limit SetSysColors to one application - c++

I want to change the set of colors used in my software. My research led me to the SetSysColors() function. The problem with this function is that every application on the computer is affected by the modification, not only the executable I want to modify.
Is there a way, or an alternative, to change the set of colors used by my application without having to redraw everything manually?

In some cases using custom colors may indeed be beneficial (highlighting, notifications, color-coding, etc.); just don't overdo it. There are several mechanisms in the Windows API (usually accessible in MFC too, through events or direct Windows message handlers) that let you customize the appearance of your application. Look into the documentation for owner-drawn controls, custom control colors, window class brushes, and messages like WM_CTLCOLOR and WM_ERASEBKGND. You can also customize the look of a window's non-client area (e.g. title bar, borders, etc.), although this changes the appearance of your app far more drastically. See messages like WM_NCPAINT and WM_NCHITTEST.
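As a minimal sketch of the message-based approach (plain Win32; the specific colors and which WM_CTLCOLOR* messages you handle are illustrative):

```cpp
#include <windows.h>

// Sketch: per-application control colors via WM_CTLCOLOR* messages.
// The brush must outlive the controls that use it, so keep it at file scope.
static HBRUSH g_hbrBackground = NULL;

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
        g_hbrBackground = CreateSolidBrush(RGB(30, 30, 46)); // dark background
        return 0;

    case WM_CTLCOLORSTATIC:   // static controls ask their parent for colors
    case WM_CTLCOLOREDIT:     // edit controls do the same
    {
        HDC hdc = (HDC)wParam;
        SetTextColor(hdc, RGB(220, 220, 255)); // light text
        SetBkColor(hdc, RGB(30, 30, 46));      // match the brush color
        return (LRESULT)g_hbrBackground;       // brush paints the control background
    }

    case WM_DESTROY:
        DeleteObject(g_hbrBackground);
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcW(hWnd, msg, wParam, lParam);
}
```

Because only your own window procedure returns these colors, nothing outside your process is affected, unlike SetSysColors.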

Related

C++ win32 API Create multiple windows like viewports

I am trying to build a Level Editor for my engine and I wondered how I can achieve multiple viewport windows in one window, like in Blender, Cinema 4D or Unity, where you have your rendering viewport, scene hierarchy, properties window etc.
Does the win32 API have a function to create these viewport windows or do I have to create another instance with CreateWindowW with no title bar?
You could conceivably do this with a single window, but this sort of thing is usually much easier to achieve using a child window (yes, created via CreateWindow(Ex)) for each view, plus a parent window that handles positioning those child windows (that is, a splitter-type frame).
You may even end up with a window tree that is separate from the level view, for a properties list, etc.
It is simply much easier for the child windows to only need to handle one thing (showing an overhead level view, showing a 3D projection, etc.) than to make one window class that does all of these.
There is no native notion of a "viewport" in Win32.
To support this kind of functionality at all, to create even a single viewport, you will need to know how to create a custom control. In Win32 "custom controls" are really just custom child windows. Say you have a custom child window class called "view" that handles rendering using a 3D library in its WM_PAINT handler, etc., then to support multiple viewports you fundamentally have two options:
Make "view" implement the functionality itself. Multiple viewports would not be separate Win32 windows. There would be one Win32 child control painted to look as though it was multiple windows. You would then need to handle all the internal UI interactions you offer the user 100% yourself. Dragging the view splitter bar, etc. The benefit would be that you could then make those interactions however you want, possibly totally nonstandard, and also performance while dragging and performing other interactions would probably be better than the alternative.
Use separate "view" child windows for each viewport. Handle UI interactions via other custom child controls, possibly, e.g. a view splitter control, etc.
Without more focus to the question that is about as much of an answer as can be given. The key thing to understand is that Win32 is a powerful but low-level API. If you are looking for an application framework that gives you a lot of functionality for free you should look somewhere else.
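A minimal sketch of the child-window approach from option 2, assuming a "ViewClass" window class registered elsewhere with RegisterClassEx (the class name, IDs, and layout are illustrative):

```cpp
#include <windows.h>

// Sketch: one parent "frame" window hosting two child "view" windows
// side by side. Each child paints its own content in its WM_PAINT handler.
void CreateViewports(HWND hFrame, HINSTANCE hInstance)
{
    RECT rc;
    GetClientRect(hFrame, &rc);
    int halfWidth = (rc.right - rc.left) / 2;

    // Left viewport, e.g. a 3D perspective view.
    CreateWindowExW(0, L"ViewClass", NULL,
                    WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS,
                    0, 0, halfWidth, rc.bottom,
                    hFrame, (HMENU)1, hInstance, NULL);

    // Right viewport, e.g. an overhead level view.
    CreateWindowExW(0, L"ViewClass", NULL,
                    WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS,
                    halfWidth, 0, rc.right - halfWidth, rc.bottom,
                    hFrame, (HMENU)2, hInstance, NULL);
}

// The frame's WM_SIZE handler would then reposition the children with
// MoveWindow/SetWindowPos; dragging a splitter is just recomputing halfWidth.
```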

wxWidgets Overlay Text (C++)

I am trying to place some overlay text over a wxPanel.
I have several panels with completely different content, but I want to place this overlay text over all of these panels, in the top right corner of the panel.
I am restricted to wxWidgets 2.8.12.
Do you see any way to achieve this behaviour?
Edit:
Here is a more detailed version of what I am trying to do:
I have a layout that consists of, e.g., 5 containers, and each container can hold a module. A module can be a wxPanel containing plain text or input controls, or, for example, an OpenGL canvas, an image, or something else.
Because I have a lot of content and it does not fit on a single page, I want to make the modules inside a container exchangeable. It would also be nice if the user could perform this action using only the keyboard, e.g. pressing the key "3" switches the content of the third container.
Handling these shortcuts isn't a problem. However, I need to signal the identifier / hotkey of each container to the user.
I could do this by placing an additional headline above each container, but I want to waste as little space as possible in the GUI.
I could also draw directly onto the module's content, but I would have to do this for every module, and every module is designed in a different way (images, multi-column, OpenGL, ...), maybe even by different people.
So I am looking for a simple solution to indicate the number of these containers that does not consume much space.
Thanks for your help
You can use a wxWindowDC to draw anywhere on the window, even over child windows. However, anything you draw will be painted over whenever the windows or controls repaint themselves. You can draw your overlay in an update-UI event handler to minimize this. I have used this approach successfully on Windows with wxWidgets 2.8.12. Not sure whether it works with OpenGL, though.
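A minimal sketch of that approach (the class and member names are illustrative; it assumes wxWidgets 2.8-style Connect() event wiring):

```cpp
#include <wx/wx.h>

// Sketch: draw an overlay hotkey label in the top-right corner of a panel
// using wxWindowDC from an update-UI handler.
class ContainerPanel : public wxPanel
{
public:
    ContainerPanel(wxWindow* parent, const wxString& hotkey)
        : wxPanel(parent), m_hotkey(hotkey)
    {
        Connect(wxEVT_UPDATE_UI,
                wxUpdateUIEventHandler(ContainerPanel::OnUpdateUI));
    }

private:
    void OnUpdateUI(wxUpdateUIEvent& event)
    {
        // wxWindowDC can paint over the whole window, children included,
        // but the overlay is wiped whenever the underlying controls repaint,
        // so redrawing it here keeps it visible most of the time.
        wxWindowDC dc(this);
        dc.SetTextForeground(*wxRED);
        wxCoord w, h;
        dc.GetTextExtent(m_hotkey, &w, &h);
        dc.DrawText(m_hotkey, GetClientSize().GetWidth() - w - 4, 4);
        event.Skip();
    }

    wxString m_hotkey;
};
```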

Set MFC application to maximum resolution without much change

I have an MFC application which is used frequently and works well; it has become an important part of client work, and downtime is not tolerable.
The problem is that the hardware has changed: the old monitors have been replaced by LCD monitors, and the MFC application no longer fits the screen on all of them. Is there a way I can simply change MFC settings and recompile without much code change, since anything larger would trigger a lengthy test-fix-test cycle?
I would also be happy to use a third-party tool that acts as a container for this MFC application (which needs a fixed resolution) and gives me scroll bars, like a virtual monitor.
Thanks
Two ideas without code change:
1) Just set the resolution of the desktop to another value, such that it's the same as on older monitors.
2) Change the font size in the dialog resources. This will change the size of the whole dialog.
With code change:
1) Use CDC::SetWorldTransform(const XFORM& rXform) to scale the CDC before you paint onto the CDC.
2) Use CDC::SetViewportExt(..), enable the scrollbars in CreateWindow(..), and handle the scroll events by using CDC::SetViewportOrg(..) to move the content of the window.

Adding a user interface to an image viewer plugin

I have a general question on how to develop an image viewer plugin with FireBreath. For that, I would like to incorporate a GUI framework like wxWidgets or Qt. The GUI would be used to fire up dialogs, add a toolbar on top, or open context menus when right-clicking an image.
As far as I understand, I have an HWND handle, so I can draw onto a window. I also understand that there are various events I can react to, like mouse button clicks or keystrokes. But it escapes me how I would add graphical menus, buttons, etc. I know I could use HTML around the window, but that's not the route I'd like to take.
For instance, does it make sense to render a user interface offline (in memory) onto an image and then somehow keep track of the state internally?
Has anyone done such a thing? Or can anyone give me some insight on how to add a user interface?
Assuming you only care about Windows, and assuming you don't mind using a windowed plugin (which is the easiest approach, though no HTML elements can float over the plugin), it should be no different from creating a GUI in any other Windows application.
You are given a window that shows up with the AttachedEvent; when DetachedEvent fires you need to stop using the window. Many people create a child window inside that parent window and use it for all their actual code, which makes it a little easier to use one of those other abstractions, but that's basically all there is to it. I don't know specifically how you'd do it with Qt or wxWidgets, but you'd create a child window of the HWND you are given and have the framework do the rest for you.
As to whether or not it would render things offscreen, etc., I have no idea; that depends entirely on the windowing toolkit. There is no reason I know of that you would need to do that, and most toolkits just draw directly to the HWND, but there are countless ways you could do it. It sounds like what you really need is to understand how drawing in Windows actually works.
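A rough sketch of the child-window idea, in plain Win32 (the class name and helper function are illustrative, not FireBreath API):

```cpp
#include <windows.h>

// Sketch: given the plugin's HWND (obtained when the attached event fires),
// create a child window that the GUI toolkit then takes over.
// "GuiHostClass" is assumed to be registered elsewhere; a toolkit like Qt or
// wxWidgets would typically be pointed at this child window.
HWND AttachGuiChild(HWND hPluginWnd, HINSTANCE hInstance)
{
    RECT rc;
    GetClientRect(hPluginWnd, &rc);

    return CreateWindowExW(0, L"GuiHostClass", NULL,
                           WS_CHILD | WS_VISIBLE,
                           0, 0, rc.right, rc.bottom,
                           hPluginWnd, NULL, hInstance, NULL);
}
// When the detached event fires, destroy this child window and stop
// touching the parent HWND entirely.
```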
I hope that helps

Simulate fullscreen

I've seen an application that simulates a fullscreen application by removing the title bar and the window borders. I've done some research and found SetWindowLongPtr() for that.
Now my question: How can I find and identify the application and get the appropriate window handle? How can I distinguish multiple instances of the application (running from different locations on disc)?
Just to make "simulate" more precise: if you make an application go truly fullscreen and you click on a different monitor, it minimizes itself. If the application runs in a window and you click on a different monitor, the window is unaffected. If you remove the borders of the window and position it on the left or right monitor, you can still work with the other monitor without the application minimizing, yet it still looks as if the application is running fullscreen on one of the monitors.
As an example: you can set Eve (www.eveonline.com) to fullscreen or windowed mode. In fullscreen mode you cannot click on a second monitor without Eve minimizing itself. In windowed mode you can. There are tools like evemover that let you set up your window on one monitor, looking like fullscreen but being in windowed mode. That's what I want to achieve. Evemover actually provides some of its source code; that's how I know that removing the border and setting the position is done with the Win32 API using SetWindowLongPtr and SetWindowPos.
Many applications use divergent and confusing meanings of the phrase "fullscreen".
A fullscreen application simply occupies the full screen area.
DirectX applications can request a fullscreen exclusive mode. The advantage of this mode for DirectX applications is that, with exclusive access to the (full) screen, they are allowed to change the resolution, bit depth, etc., as well as gain access to vsync-synchronized hardware buffering, where the screen surface is 'flipped' between display intervals so that tearing does not occur.
Anyway, the Windows desktop understands 'fullscreen windows': windows that occupy the full area of a monitor and have no non-client elements. When windows like this are created, things like desktop gadgets and taskbars automatically hide themselves. Modern games have come to call this mode 'fullscreen windowed'.
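A sketch of how an existing window can be switched to this 'fullscreen windowed' style (plain Win32; error handling trimmed):

```cpp
#include <windows.h>

// Sketch: turn an existing window into a borderless window covering
// the monitor it is currently on ("fullscreen windowed").
void MakeFullscreenWindowed(HWND hWnd)
{
    // Strip the caption and sizing border, keeping it an ordinary window.
    LONG_PTR style = GetWindowLongPtrW(hWnd, GWL_STYLE);
    style &= ~(WS_CAPTION | WS_THICKFRAME);
    SetWindowLongPtrW(hWnd, GWL_STYLE, style);

    // Size it to the monitor containing the window.
    HMONITOR hMon = MonitorFromWindow(hWnd, MONITOR_DEFAULTTONEAREST);
    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfoW(hMon, &mi);
    SetWindowPos(hWnd, NULL,
                 mi.rcMonitor.left, mi.rcMonitor.top,
                 mi.rcMonitor.right - mi.rcMonitor.left,
                 mi.rcMonitor.bottom - mi.rcMonitor.top,
                 SWP_NOZORDER | SWP_FRAMECHANGED); // apply the new frame style
}
```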
Back to your question: FindWindow is the API used to discover other applications' windows. Getting the path to the application that created the window is harder: GetWindowThreadProcessId gets you the process id of the owning process, OpenProcess gets you a handle, and that handle can be passed to QueryFullProcessImageName (implemented on Vista and above) to get the full path to the process.
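A sketch of that chain (Vista and above; the lookup by window title and the error handling are simplified):

```cpp
#include <windows.h>
#include <string>

// Sketch: from a window title to the full path of the owning executable.
// Comparing the returned path lets you distinguish instances launched
// from different locations on disk.
std::wstring GetExePathFromWindow(const wchar_t* windowTitle)
{
    HWND hWnd = FindWindowW(NULL, windowTitle);
    if (!hWnd) return L"";

    DWORD pid = 0;
    GetWindowThreadProcessId(hWnd, &pid);

    HANDLE hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (!hProcess) return L"";

    wchar_t path[MAX_PATH];
    DWORD size = MAX_PATH;
    std::wstring result;
    if (QueryFullProcessImageNameW(hProcess, 0, path, &size))
        result.assign(path, size); // size is updated to the actual length

    CloseHandle(hProcess);
    return result;
}
```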
I think you are referring to applications like window aggregators that 'plug in' to the system and act from outside the application.
Look at the code of the freeware app PuTTYCM (for aggregating PuTTY (SSH) shell windows as tabs). IIRC, it ensures that the window pointer passed to the application already has the flags set.
For applications running from different places, you will probably need some way of identifying them: registry entries, an install log, etc.