What is a native window in Chromium? - c++

In How Chromium Displays Web Pages, it says,
The select boxes must be rendered using a native window so that they can appear above everything else, and pop out of the frame if necessary.
What is a native window in this context? Conversely, what are non-native windows?
Some of the technical information in How Chromium Displays Web Pages is outdated, yes, but I don't see any reason that would matter in this context. A native window here is one created through the operating system's own windowing system (an HWND on Windows), so the OS can draw it on top of everything else and let it extend beyond the browser frame; non-native windows are controls that Chromium renders itself inside the web content area, and which are therefore clipped to it.

Related

How to make a Native UI window in C++ with HTML/CSS UI in the client area

I have this program; it is apparently coded in C++, and I can see that the window (a dialog box) is a native one, but in the middle of the window it has modern-looking UI elements, and when I right-click on the client area (the part with the modern UI elements) it shows a context menu like a web browser does (with almost the same items as Internet Explorer).
There is also a newer version of this program, apparently coded the same way as before, but the content in the web-browser-like area is now written in Silverlight.
So, according to my understanding, this is just a native window with an HTML web page in the client area, which allows it to take advantage of CSS styling.
I would love to know how such a program can be developed in C++ and how event handling is done in such a system.
Any help is much appreciated.
Thanks
Basically you do this by embedding a web browser control into your application. Microsoft provides such a control directly, or you can use third-party alternatives.
In WinForms you can add a WebBrowser control like any other common control (see here). You can also include the same control in a WPF application.
The above .NET WebBrowser control emulates IE7 by default (I believe), even if the user has a newer version of IE installed. You can make it use a newer version as shown in various online resources. A better option, however, is not to use IE or the WinForms WebBrowser control at all. Consider CefSharp. "Cef" stands for Chromium Embedded Framework, which is a native library; CefSharp is a C# wrapper around CEF.
If you are not using C#, you can embed CEF directly in a native Win32 application. Or you can embed Microsoft's IE-based control as, essentially, a COM object, but I would recommend the former in this case. This is exactly what CEF is for.
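For illustration, here is a rough sketch of what embedding CEF in a plain Win32 program can look like. It is not a drop-in implementation: the exact argument lists of CefBrowserHost::CreateBrowser and friends have changed between CEF versions, and SimpleClient is just a minimal stand-in for your own CefClient subclass with real handlers.
#include "include/cef_app.h"
#include "include/cef_browser.h"
#include "include/cef_client.h"

// Minimal client; a real application would also implement handlers
// (CefLifeSpanHandler, CefDisplayHandler, ...) on this class.
class SimpleClient : public CefClient {
 public:
  IMPLEMENT_REFCOUNTING(SimpleClient);
};

int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE, LPWSTR, int) {
  CefMainArgs main_args(hInstance);

  // CEF re-launches this executable for its helper processes.
  int exit_code = CefExecuteProcess(main_args, nullptr, nullptr);
  if (exit_code >= 0)
    return exit_code;

  CefSettings settings;
  CefInitialize(main_args, settings, nullptr, nullptr);

  // Host the browser in a native window. In the scenario above you would
  // call window_info.SetAsChild(yourDialogHwnd, rect) instead, so the HTML
  // content fills the client area of your own dialog.
  CefWindowInfo window_info;
  window_info.SetAsPopup(nullptr, "CEF example");

  CefBrowserSettings browser_settings;
  CefBrowserHost::CreateBrowser(window_info, new SimpleClient(),
                                "https://www.example.com", browser_settings,
                                nullptr, nullptr);

  CefRunMessageLoop();  // the browser drives the Win32 message loop
  CefShutdown();
  return 0;
}
Event handling between the page and the native side is then typically done through CEF's JavaScript bindings (CefV8Handler and process messages) or, in the IE-control case, through the COM event interfaces the control exposes.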

What desktop does Metro stuff run in?

Just curious, from the standpoint of a WinAPI developer, what desktop do Metro apps run in?
I didn't know that it would be such a secret... so I had to do some investigating and here's what I found:
First off, to answer my original question: the Metro (or Modern UI) stuff runs in the exact same desktop as the "desktop" apps (pardon the pun). It is actually all very simple. The short answer: all Microsoft-approved Metro stuff runs either in the Internet Explorer_Server container (which, in layman's terms, is Internet Explorer) or in the DirectUIHWND container (Microsoft's proprietary class that renders their undocumented UI), all in windows with the WS_EX_TOPMOST style turned on, which makes them render on top of the other content. And that is it!
Here are a couple of examples:
Let's split the desktop and use Spy++ to see what's happening under the hood:
So if we look into the "Weather" app window, it is nothing more than a regular (Win32) window of the "Internet Explorer_Server" class that is housed in a window of the "Web Platform Embedding" class, which in turn sits in the "Windows.UI.Core.CoreWindow" container that has the WS_EX_TOPMOST and WS_EX_NOREDIRECTIONBITMAP styles on.
If you look even deeper, all of Microsoft's Metro stuff seems to run from the WWAHOST.exe process, which, in simple terms, is the container that runs JavaScript for the Metro apps.
Now let's look into the Start Screen itself. Since it completely covers the desktop, we need to use a different tool and its Shift-key snapshot capability to get to it.
From it we can get the Start Screen's window handle (0x10158 in my case) and look it up in Spy++.
As you can see from both tools, the Start Screen has the window class DirectUIHWND, which is housed inside a window of the ImmersiveLauncher class, which in turn is the one with the WS_EX_TOPMOST and WS_EX_NOREDIRECTIONBITMAP styles that make it remain on top. And that is the only difference between it and any other window created by a "desktop" app.
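If you want to verify these findings programmatically rather than through Spy++, a small sketch like the one below (plain Win32; the InspectWindow helper name is mine) reads a window's class name and checks for the two extended styles discussed here:
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

// WS_EX_NOREDIRECTIONBITMAP requires a Windows 8+ SDK; define it if missing.
#ifndef WS_EX_NOREDIRECTIONBITMAP
#define WS_EX_NOREDIRECTIONBITMAP 0x00200000L
#endif

void InspectWindow(HWND hWnd)
{
    TCHAR className[256] = { 0 };
    GetClassName(hWnd, className, 256);

    LONG_PTR exStyle = GetWindowLongPtr(hWnd, GWL_EXSTYLE);

    _tprintf(TEXT("class=%s topmost=%d noredirectionbitmap=%d\n"),
             className,
             (exStyle & WS_EX_TOPMOST) != 0,
             (exStyle & WS_EX_NOREDIRECTIONBITMAP) != 0);
}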
What is also interesting is how the "desktop" itself is rendered in the case of a split-window situation. I originally assumed that the desktop is simply shifted (or moved) to one side and resized, but that is not what happens... In reality (or at least in my Windows 8.1), in the case of a split between the desktop and a Metro app, the Metro app simply covers part of the desktop, but the desktop itself does not change its position or size. Only the taskbar and the existing desktop windows are moved and resized to fit the split.
As a side note, such moving and resizing can be quite annoying for a user, since original positions and sizes of the desktop windows are not restored when the split goes away.
And lastly, a somewhat unexpected finding. I decided to check how Google folks were able to implement their Chrome browser (running as a Metro app) and found this:
Chrome renders in a window of the Windows.UI.Core.CoreWindow class, belonging to Google's own process: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe". So without going any deeper, it is evidently possible to encapsulate a Metro-style app in a non-Microsoft container, which is good news for developers who don't care about AppStore XAML apps :)
EDIT: Forgot to mention, if you plan to show your own popup message from a Win32 process that is visible on top of a Metro app, you need to do the following:
Set UIAccess="true" in the process manifest. You can do so in the Visual Studio by going to Project Properties -> Linker -> Manifest Files and set UAC Bypass UI Protection to YES. (Note that you can keep UAC Execution Level as asInvoker, or not to require elevation of your process.)
Code-sign your process. This is important: without a signature it won't work, and you'll see this error message: "A referral was returned from the server."
As an alternative to signing (or for testing purposes on your development system), you can set the following registry key to 0. (I haven't tried it myself, and I wouldn't recommend it due to obvious security concerns! But it seems to be another way to test it if a code-signing certificate is not available.)
HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\ValidateAdminCodeSignatures
Place the compiled executable either into the %windir%\System32 folder or, more realistically, into %ProgramFiles%\Company\Product (or the alternative %ProgramFiles(X86)%\Company\Product) as your product's installation location.
Also you may consider reading Raymond Chen's article about this topic.
After that, when you set the WS_EX_TOPMOST style on your popup window, it will be displayed above any other windows, including the Metro apps, Start Screen, etc.
So in other words, doing this:
//You may also consider setting the WS_EX_NOACTIVATE style
::SetWindowPos(hWnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
can achieve this.
Since you say that your actual issue is knowing whether a Metro app is running, the answer is to call IAppVisibility::GetAppVisibilityOnMonitor. Pass the monitor you want to check. Note that this will give the correct answer regardless of what desktop the applications run in.
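For example, a minimal sketch of that call (error handling trimmed, primary monitor assumed) could look like this:
#include <windows.h>
#include <shobjidl.h>
#include <stdio.h>

#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    IAppVisibility* pAppVisibility = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_AppVisibility, nullptr, CLSCTX_ALL,
                                  IID_PPV_ARGS(&pAppVisibility));
    if (SUCCEEDED(hr))
    {
        // Check the primary monitor; pass any HMONITOR you care about.
        HMONITOR hMon = MonitorFromWindow(GetDesktopWindow(),
                                          MONITOR_DEFAULTTOPRIMARY);
        MONITOR_APP_VISIBILITY mode;
        if (SUCCEEDED(pAppVisibility->GetAppVisibilityOnMonitor(hMon, &mode)))
            printf("Metro app visible: %s\n",
                   mode == MAV_APP_VISIBLE ? "yes" : "no");
        pAppVisibility->Release();
    }

    CoUninitialize();
    return 0;
}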

Can you use Win32 GUI in a browser plugin?

Of course it would mean your plugin is not cross-platform, but let's focus on the technical side...
Is a browser plugin (like one done in NPAPI) restricted in what it can do? Or do you get fairly free rein to access the PC and the render window you're given? For instance, can you create Win32/MFC controls in your browser this way?
A side question - is your browser plugin conceptually akin to a .DLL, which is therefore just arbitrary compiled code implementing a specific interface for browser control/communication?
There are two types of NPAPI plugins: windowed and windowless. Both have some advantages and disadvantages (see this link). When you deal with a windowed plugin on Win32, you get the HWND of the browser's plugin window, and you can work with it like with any other window in the OS.
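For instance, with the (now long-deprecated) NPAPI SDK headers, the windowed case boils down to something like this rough sketch: in NPP_SetWindow the browser hands you its child HWND, and you can parent ordinary Win32 controls to it.
#include <windows.h>
#include "npapi.h"

NPError NPP_SetWindow(NPP instance, NPWindow* window)
{
    UNREFERENCED_PARAMETER(instance);

    if (window == NULL || window->window == NULL)
        return NPERR_NO_ERROR;

    // For a windowed plugin, window->window is the HWND the browser created
    // for the plugin's rectangle in the page.
    HWND hPluginWnd = (HWND)window->window;

    // Create a plain Win32 push button inside that area, guarding against
    // NPP_SetWindow being called more than once for the same instance.
    if (GetWindow(hPluginWnd, GW_CHILD) == NULL)
    {
        CreateWindowEx(0, TEXT("BUTTON"), TEXT("Native button"),
                       WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
                       10, 10, 120, 30,
                       hPluginWnd, NULL,
                       (HINSTANCE)GetModuleHandle(NULL), NULL);
    }

    return NPERR_NO_ERROR;
}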

Qt question to fullscreen flash application

I am using Qt to develop an application, and inside it we have access to select Flash streaming videos, e.g. from YouTube. Is there a way to programmatically make the Flash application full screen without requiring interaction from the user?
I am using a "QWebView" control.
Try calling showFullScreen() on the window where your QWebView control is hosted.
void QWidget::showFullScreen()
Shows the widget in full-screen mode.
Calling this function only affects windows.
To return from full-screen mode, call showNormal().
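A minimal sketch of that suggestion (Qt 4-era QtWebKit, matching the QWebView mentioned in the question) might look like this:
#include <QApplication>
#include <QUrl>
#include <QWebView>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QWebView view;
    view.load(QUrl("http://www.youtube.com/"));

    // Full-screens the top-level window hosting the view; the Flash player
    // itself still decides how it scales within the page.
    view.window()->showFullScreen();

    return app.exec();
}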
I would say: locate the button that full-screens the application on the page, and send a click using QEvent. Tricky, but it might work.
If the button is inside the Flash application, you will have difficulty locating it, but if you succeed, you can probably send the click to the Flash application's area.
You can always inject JavaScript from Qt into your QWebPage. Whether there is a JavaScript API for forcing the Flash viewer into full screen, I do not know.
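As a sketch of that idea (the clickFullScreenButton call below is purely hypothetical; you would need to know what the page or player actually exposes):
#include <QWebView>
#include <QWebFrame>

void injectFullScreenScript(QWebView* view)
{
    // evaluateJavaScript runs the string in the page's JavaScript context.
    view->page()->mainFrame()->evaluateJavaScript(
        "if (window.clickFullScreenButton) { clickFullScreenButton(); }");
}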

How do I get the window handle of the desktop?

The Windows API provides a function, GetDesktopWindow(), which returns the window handle of the desktop.
But I tested with Spy++ and found that the window handle of the desktop and the window handle of the "Windows Desktop" are not the same.
As the "Windows Desktop" is a list view, do I need to do the following?
1) HWND hWnd = GetDesktopWindow();
2) FindWindowEx(hWnd, ..... ) with SysListView32 as the window class.
Once I get the window handle, I want to use SendMessage() for operations like getting the selected file name, the number of files selected, etc.
Please give your opinions. I am doing this using the Windows SDK.
In light of a recent discussion on Meta complaining that questions like this one have "not been properly answered", I'm going to try and give answering this one a whirl. Not to imply that I think meklarian's answer is bad—in fact, far from it. But it's clearly been deemed unsatisfactory, so perhaps I can fill in some of the additional details.
Your problem results from a fairly widespread confusion over what the desktop window actually is. The GetDesktopWindow function does precisely what it's documented to do: it returns a handle to the desktop window. This, however, is not the same window that contains the desktop icons. That's a completely different window that appeared for the first time in Windows 95. It's actually a ListView control set to the "Large Icons" view, with the actual desktop window as its parent.
Raymond Chen, a developer on the Windows Shell team, provides some additional detail in the following Windows Confidential article: Leftovers from Windows 3.0
[ . . . ] While in Windows 3.0, icons on the desktop represented minimized windows, in Windows 95, the desktop acted as an icon container.
The Windows 95 desktop was actually a window created by Explorer that covered your screen (but sat beneath all the other windows on your desktop). That was the window that displayed your icons. There was still a window manager desktop window beneath that (the window you get if you call Get­Desktop­Window), but you never saw it because it was covered by the Windows 95 desktop—the same way that the wood paneling in the basement of my colleague’s house covered the original wall and the time capsule behind the wall.
[ . . . ]
This desktop design has remained largely unchanged since its introduction in Windows 95. On a typical machine, the original desktop is still there, but it’s completely covered by the Explorer desktop.
In summary, then, the window returned by the GetDesktopWindow function is the actual desktop window, the only one we had way back in Windows 3.0. The Explorer desktop (the one that contains all your icons) is merely another window sitting on top of the desktop window (although one that completely covers the original) that wasn't added until Windows 95.
If you want to get a handle to the Explorer desktop window, you need to do some additional work beyond simply calling the GetDesktopWindow function. In particular, you need to traverse the child windows of the actual desktop window to find the one that Explorer uses to display icons. Do this by calling the FindWindowEx function to get each window in the hierarchy until you get to the one that you want. It has a class name of SysListView32. You'll also probably want to use the GetShellWindow function, which returns a handle to the Shell's desktop window, to help get you started.
The code might look like this (warning: this code is untested, and I don't recommend using it anyway!):
// Illustrative wrapper; the helper's name is arbitrary.
HWND GetExplorerDesktopListView()
{
    HWND hShellWnd = GetShellWindow();
    HWND hDefView = FindWindowEx(hShellWnd, NULL, _T("SHELLDLL_DefView"), NULL);
    HWND folderView = FindWindowEx(hDefView, NULL, _T("SysListView32"), NULL);
    return folderView;
}
I noted there that I don't actually recommend using that code. Why not? Because in almost every case that you want to get a handle to the desktop window (either the actual desktop window, or the Explorer desktop), you're doing something wrong.
This isn't how you're supposed to interact with the desktop window. In fact, you're not really supposed to interact with it at all! Remember how you learned when you were a child that you're not supposed to play with things that belong to other people without their permission? Well, the desktop belongs to Windows (more specifically, to the Shell), and it hasn't given you permission to play with its toys! And like any good child, the Shell is subject to throwing a fit when you try to play with its toys without asking.
The same Raymond Chen has published another article on his blog that details a very specific case, entitled What's so special about the desktop window?
Beyond the example he gives, this is fundamentally not the way to do UI automation. It's simply too fragile, too problematic, and too subject to breaking on future versions of Windows. Instead, define what it is that you're actually trying to accomplish, and then search for the function that enables you to do that.
If such a function does not exist, the lesson to be learned is not that Microsoft simply wants to make life harder for developers, but rather that you aren't supposed to be doing that in the first place.
If you want the Desktop window as defined in GetDesktopWindow(), use that window handle. This is the window handle you should use to look for top-level windows and other related activities.
What you're seeing in Spy++ is just the content drawn as the desktop in your session. If you use auto-locate in Spy++, you'll see that the SysListView32 window is a child window of your Explorer shell. It is quite infrequent for someone to need access to this window. Also, the existence of this window may be subject to change between versions of Windows.
Edit (additional info)
If you are looking to interact with or place things on the actual shell desktop, you may be better served by other APIs. Here are two such APIs that can accomplish this, depending on the target version of Windows:
Windows Sidebar (MSDN)
This is available on Windows Vista and Windows 7.
Using the Active Desktop (MSDN)
This is available on Windows 2000 and XP, although frequently disabled by users and sysadmins.