In QtWebkit, how does a webpage's QNetworkAccessManager::createRequest() get invoked?

I'm building a Browser application using the QtWebkit and QtNetwork modules.
Let's say it's a requirement that each webpage be able to access resources only from a specific folder set aside for it. In this scenario, each webpage would have some kind of ID identifying it, which could be used to verify that it's accessing the correct folder.
The problem is that it's not clear how exactly the createRequest() method gets invoked. If it were a signal being emitted, I would be able to intercept it and add a few parameters indicating the webpage ID.
As it stands, the only option open to me is to create a separate QNetworkAccessManager for each QWebPage and override the createRequest() function, whereas I would really like to be able to share one QNetworkAccessManager across QWebPages.
Alternative solutions would be appreciated, but in general I'm also confused about how the createRequest() method is reached.
Reference: QNetworkAccessManager::createRequest

It's not a big deal to have a separate access manager for each web page. You don't have any measurements to show it to be a problem, so in a true Don Quixote fashion, you're fighting windmills and imaginary enemies :)
The createRequest() virtual method is called by the various non-virtual request methods: get(), post() and put(). It's a good example of the non-virtual interface (NVI) pattern.
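As a rough sketch of what that mechanism allows (the class name, the file-scheme check and the about:blank fallback below are illustrative choices, not anything prescribed by Qt or the question):
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QString>
#include <QUrl>
class FolderRestrictedManager : public QNetworkAccessManager
{
public:
    FolderRestrictedManager(const QString &allowedFolder, QObject *parent = 0)
        : QNetworkAccessManager(parent), m_allowedFolder(allowedFolder) {}
protected:
    // The public, non-virtual get()/post()/put() all funnel into this protected
    // virtual, so every request made on behalf of the page passes through here.
    QNetworkReply *createRequest(Operation op, const QNetworkRequest &request,
                                 QIODevice *outgoingData = 0)
    {
        const QUrl url = request.url();
        if (url.scheme() == QLatin1String("file")
            && !url.toLocalFile().startsWith(m_allowedFolder)) {
            // One possible policy: rewrite disallowed local requests to about:blank.
            QNetworkRequest blocked(request);
            blocked.setUrl(QUrl(QString::fromLatin1("about:blank")));
            return QNetworkAccessManager::createRequest(op, blocked, outgoingData);
        }
        return QNetworkAccessManager::createRequest(op, request, outgoingData);
    }
private:
    QString m_allowedFolder;
};
Each QWebPage would then get its own instance via QWebPage::setNetworkAccessManager(), which is exactly the per-page approach described above as not being a big deal.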

Related

C++ Unable to get Async DragDrop functioning correctly

We currently have a Silverlight UI (which we are unable to change at this stage) for our system, which has very limited drag-and-drop capabilities. We are currently running out-of-browser with elevated trust. So, to work around Silverlight's shortcomings, I have created a C++ COM library to handle drag-and-drop events. This works perfectly well for incoming events from other applications; however, I'm struggling to get drag operations, with our app as the source, working correctly. Most of the files to be dragged from the app will be virtual, which I have managed to get working. However, regardless of everything I've tried, I have been unable to make the operation asynchronous, and the app locks up during the process.
I initially implemented only IAsyncOperation (we need backward compatibility to XP), which had no apparent effect. My DataObject is queried for the interface and the reference is obtained; a call to GetAsyncMode is made, which returns VARIANT_TRUE, and a call to StartOperation is made. However, all operations are done on the same thread (the UI thread) and no async behaviour seems to be in effect.
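For reference, here is a bare sketch of the IAsyncOperation methods involved (CMyDataObject and its member variable are placeholder names, the IDataObject/ATL plumbing is omitted, and implementing this alone evidently does not make the operation asynchronous):
#include <objidl.h>
#include <shlobj.h>   // IAsyncOperation, later renamed IDataObjectAsyncCapability
class CMyDataObject /* : public IDataObject, public IAsyncOperation, ... */
{
    BOOL m_inOperation;
public:
    CMyDataObject() : m_inOperation(FALSE) {}
    STDMETHODIMP SetAsyncMode(BOOL /*fDoOpAsync*/)
    { return S_OK; }
    STDMETHODIMP GetAsyncMode(BOOL *pfIsOpAsync)
    { if (!pfIsOpAsync) return E_POINTER; *pfIsOpAsync = VARIANT_TRUE; return S_OK; }
    STDMETHODIMP StartOperation(IBindCtx * /*pbcReserved*/)
    { m_inOperation = TRUE; return S_OK; }   // the drop target has gone asynchronous
    STDMETHODIMP InOperation(BOOL *pfInAsyncOp)
    { if (!pfInAsyncOp) return E_POINTER; *pfInAsyncOp = m_inOperation; return S_OK; }
    STDMETHODIMP EndOperation(HRESULT /*hResult*/, IBindCtx *, DWORD /*dwEffects*/)
    { m_inOperation = FALSE; return S_OK; }
};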
I subsequently tried implementing ICallFactory to return an AsyncIDataObject. Here Explorer seems to check for the ICallFactory interface, calls CreateCall on the call object, and queries it to make sure it has the correct interfaces. Using the symbol servers I am able to see that this occurs in the AsyncStubInvoke call stack. From here a call to StdStubBuffer_QueryInterface is made, which searches for the ICallFactory interface. This check fails, and I unfortunately cannot see which object is being checked for this interface. After this fails, the call seems to fall back to SyncStubInvoke after an "operation not supported" error (following the "interface not supported" error). All of this also seems to have no effect on the end result, and the call is still apparently synchronous, with the source app locking up.
My DragDrop class, which exposes the COM calls, uses CComMultiThreadModel. I have tried using my DataObject as a plain class that does not inherit from CComObjectRootEx, and as a wrapper IDataObject class which is defined in the IDL, does inherit from CComObjectRootEx, and also uses CComMultiThreadModel. I have also tried having this class inherit from IDispatch as well as IUnknown.
Any feedback would be greatly appreciated.

Public functions become remotely accessible when implementing onCFCRequest()

SOME BACKGROUND:
I'm using onCFCRequest() to handle remote CFC calls separately from regular CFM page requests. This allows me to catch errors and set MIME types cleanly for all remote requests.
THE PROBLEM:
I accidentally set some of my remote CFC functions to public access instead of remote and realized that they were still working when called remotely.
As you can see below, my implementation of onCFCRequest() has created a gaping security hole into my entire application, where an HTTP request could be used to invoke any public method on any HTTP-accessible CFC.
REPRO CODE:
In Application.cfc:
public any function onCFCRequest(string cfc, string method, struct args){
    cfc = createObject('component', cfc);
    return evaluate('cfc.#method#(argumentCollection=args)');
}
In a CFC called remotely:
public any function publicFunction(){
    return 'Public function called remotely!';
}
QUESTION:
I know I could check the meta data for the component before invoking the method to verify it allows remote access, but are there other ways I could approach this problem?
onCfcRequest() doesn't really create the security hole; you create the security hole by blindly running the method without first checking whether it's appropriate to do so, I'm afraid ;-)
(NB: I've fallen foul of exactly the same thing, so I'm not having a go at you ;-)
So - yeah - you do need to check the metadata before running the method. That check is one of the things that CF passes back to you to manage in its stead when you use this handler, and has been explicitly implemented as such (see 3039293).
I've written up a description of the issue and the solution on my blog. As observed in a comment below I use some code in there - invoke() - that will only work on CF10+, but the general technique remains the same.

C++ Output: GUI or Console?

I am making an application in C++ that runs a simulation for a health club. The user simply enters the simulation data at the beginning (3 integer values) and presses run. After that there is no user input - so very simple.
After starting the simulation a lot of the logic is deep down in lower classes but a lot of them need to print simple output messages to the UI. Returning a message is not possible as objects need to print to the UI but keep working.
I was going to pass a reference to the UI object to all classes that need it, but I end up passing it around quite a lot - there must be a better way.
What I really need is something that makes calling the UI's printOutput(string) function as easy as (or not much harder than) cout << string;
The UI also has a displayConnectionStatus(bool[] connections) method.
Bear in mind the UI inherits an abstract 'UserInterface' class so simple console UIs and GUIs can be changed in and out easily.
How do you suggest I implement this link to the UI?
If I were to use a global function, how can I redirect it to call methods of the UserInterface implementation that I selected to use?
Don't be afraid of globals.
Global objects hurt encapsulation, but for a targeted solution with no concern for immediate reusability, globals are just fine.
Expose a global object that processes events from your simulation. You can then choose to print the events, send them by e-mail, render them with OpenGL or whatever you fancy. Make a uniform interface that catches what happens inside the simulation via report callbacks, and then you can subclass this global object to suit your needs.
If the object wasn't global, you'd be passing a pointer around all the codebase.
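A minimal sketch of that suggestion, assuming the abstract UserInterface class from the question (the g_ui pointer and the uiPrint() helper are illustrative names, not an established convention):
#include <iostream>
#include <string>
// Abstract UI base class, as described in the question.
struct UserInterface {
    virtual ~UserInterface() {}
    virtual void printOutput(const std::string& msg) = 0;
};
// One concrete implementation; a GUI version would derive the same way.
struct ConsoleUI : UserInterface {
    void printOutput(const std::string& msg) { std::cout << msg << std::endl; }
};
// The global sink, set once at startup.
UserInterface* g_ui = 0;
// A free function that is as easy to call as cout, from anywhere in the simulation.
void uiPrint(const std::string& msg) {
    if (g_ui) g_ui->printOutput(msg);
}
int main() {
    ConsoleUI console;
    g_ui = &console;   // swap in the GUI implementation here instead
    uiPrint("Simulation started");
}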
I would suggest going with a logging framework, i.e. your own class LogMessages, with functions that take the data and log it - to a UI, to a file, over the network, or anywhere else.
Each class which needs logging can then use your logging class.
This way you avoid globals and get a generic solution. Also have a look at http://www.pantheios.org/, an open-source C/C++ diagnostic logging API library that you could use as well.

How to de-GUI a complex tanglewad C++/Qt4 app?

We have a large, messy app written in C++ and Qt4, with many library dependencies, hundreds of classes and no coherent structure. It normally runs as a GUI app manipulated interactively, but sometimes it's launched in a hands-off way from another program that feeds it command-line options and communicates with it by dbus. The GUI still shows, but no human or trained monkey is there to click anything. "Relaxen und watch das blinkenlights." Whether run interactively or automatically, the app writes image files.
My job for the next few weeks is to add a "no gui" feature, such that the app can run in the hands-off way and write its image files without ever showing its GUI. Internally, the images to be written are made using QImage and other non-GUI Qt objects, but these are owned by other objects that do involve the GUI classes of Qt. After several attempts to understand the mess, I cannot find a way to disentangle things so as to have the app create images without the whole full-blown GUI running. At one time, I was hoping I could just set xxx.visible = false; for all xxx that are GUI objects, but this is not practical or possible (AFAIK).
Are there any general strategies to follow to add this no-gui feature to this app? Some technique that won't require deep redesign of the class hierarchy?
The long and hard way is finding out what logic is executed and how, extracting that logic into some QObject-based classes (with signals and slots), and making it a QtCore app. I know this doesn't help, but that's the correct way.
If setting all GUI elements to hidden (or perhaps only the QMainWindow?) is not an option, this is the only thing you can do.
Qt allows you to do this, but if the original coder did not plan this in, you've got a lot of refactoring/recoding to do.
This really depends on how the program is written. If the logic is somewhat separated from the interface, then it could be as simple as finding out which class inherits from QMainWindow and making sure it is never initialised.
In the case where the logic is all over the place, I'd strongly suggest trying to get all the logic into the form of signals and slots (which has probably already happened, considering that it's a GUI app), then simply not initialising the QMainWindow instance and calling them manually.
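As a toy sketch of that idea (SimulationEngine is a placeholder for whatever the extracted logic ends up being; the relevant Qt 4 piece is the three-argument QApplication constructor, which disables the GUI without giving up QImage):
#include <QApplication>
#include <QImage>
#include <QMainWindow>
// Hypothetical stand-in for the app's extracted, GUI-free logic.
class SimulationEngine {
public:
    void runAndWriteImages() {
        QImage image(640, 480, QImage::Format_ARGB32);
        image.fill(qRgb(255, 255, 255));
        image.save("output.png");
    }
};
int main(int argc, char *argv[])
{
    bool wantGui = true;
    for (int i = 1; i < argc; ++i)
        if (qstrcmp(argv[i], "--no-gui") == 0)
            wantGui = false;
    // With GUIenabled == false, QImage and file I/O still work,
    // but no widgets may be created or shown.
    QApplication app(argc, argv, wantGui);
    SimulationEngine engine;
    if (!wantGui) {
        engine.runAndWriteImages();   // hands-off path: no QMainWindow at all
        return 0;
    }
    QMainWindow window;               // stands in for the app's real main window
    window.show();
    return app.exec();
}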
Try to subclass interface classes (QMainWindow, QDialog, etc), and implement your logic there.

Keeping the GUI separate

I have a program that (amongst other things) has a command line interface that lets the user enter strings, which will then be sent over the network. The problem is that I'm not sure how to connect the events, which are generated deep inside the GUI, to the network interface. Suppose for instance that my GUI class hierarchy looks like this:
GUI -> MainWindow -> CommandLineInterface -> EntryField
Each GUI object holds some other GUI objects and everything is private. Now the entryField object generates an event/signal that a message has been entered. At the moment I'm passing the signal up the class hierarchy so the CLI class would look something like this:
public:
    sigc::signal<void, string> msgEntered;
And in the c'tor:
entryField.msgEntered.connect(sigc::mem_fun(this, &CLI::passUp));
The passUp function just emits the signal again for the owning class (MainWindow) to connect to until I can finally do this in the main loop:
gui.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
Now this seems like a really bad solution. Every time I add something to the GUI I have to wire it up all through the class hierarchy. I do see several ways around this. I could make all objects public, which would allow me to just do this in the main loop:
gui.mainWindow.cli.entryField.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
But that would go against the idea of encapsulation. I could also pass a reference to the network interface all over the GUI, but I would like to keep the GUI code as separate as possible.
It feels like I'm missing something essential here. Is there a clean way to do this?
Note: I'm using GTK+/gtkmm/LibSigC++, but I'm not tagging it as such because I've had pretty much the same problem with Qt. It's really a general question.
The root problem is that you're treating the GUI like it's a monolithic application, only the GUI is connected to the rest of the logic via a bigger wire than usual.
You need to re-think the way the GUI interacts with the back-end server. Generally this means your GUI becomes a stand-alone application that does almost nothing and talks to the server without any direct coupling between the internals of the GUI (i.e. your signals and events) and the server's processing logic. That is, when you click a button you may want it to perform some action, in which case you need to call the server, but nearly all the other events should only change state inside the GUI and do nothing to the server - not until you're ready, or the user wants some response, or you have enough idle time to make the calls in the background.
The trick is to define an interface for the server totally independently of the GUI. You should be able to change GUIs later without modifying the server at all.
This means you will not be able to have the events sent automatically, you'll need to wire them up manually.
Try the Observer design pattern. Link includes sample code as of now.
The essential thing you are missing is that you can pass a reference without violating encapsulation if that reference is cast as an interface (abstract class) which your object implements.
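A small sketch of what that looks like (MessageSink is an illustrative interface name; the concrete NetworkInterface implements it and the GUI only ever sees the abstract type):
#include <string>
// The abstract interface the GUI is allowed to know about.
struct MessageSink {
    virtual ~MessageSink() {}
    virtual void sendMsg(const std::string& msg) = 0;
};
// The network side implements it.
class NetworkInterface : public MessageSink {
public:
    void sendMsg(const std::string& msg) { /* send msg over the network */ }
};
// The GUI side holds only a MessageSink reference, so encapsulation is preserved.
class EntryField {
public:
    explicit EntryField(MessageSink& sink) : sink_(sink) {}
    void onMessageEntered(const std::string& msg) { sink_.sendMsg(msg); }
private:
    MessageSink& sink_;
};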
Short of having some global pub/sub hub, you aren't going to get away from passing something up or down the hierarchy. Even if you abstract the listener to a generic interface or a controller, you still have to attach the controller to the UI event somehow.
With a pub/sub hub you add another layer of indirection, but there's still duplication - the entryField still says 'publish message ready event' and the listener/controller/network interface says 'listen for message ready event', so there's a common event ID that both sides need to know about, and if you're not going to hard-code that in two places then it needs to be passed into both files (though as a global it's not passed as an argument, which in itself isn't any great advantage).
I've used all four approaches - direct coupling, controller, listener and pub-sub - and in each successor you loosen the coupling a bit, but you don't ever get away from having some duplication, even if it's only the id of the published event.
It really comes down to variance. If you find you need to switch to a different implementation of the interface, then abstracting the concrete interface as a controller is worthwhile. If you find you need to have other logic observing the state, change it to an observer. If you need to decouple it between processes, or want to plug into a more general architecture, pub/sub can work, but it introduces a form of global state, and isn't as amenable to compile-time checking.
But if you don't need to vary the parts of the system independently it's probably not worth worrying about.
As this is a general question I’ll try to answer it even though I’m “only” a Java programmer. :)
I prefer to use interfaces (abstract classes or whatever the corresponding mechanism is in C++) on both sides of my programs. On one side there is the program core that contains the business logic. It can generate events that e.g. GUI classes can receive, such as (for your example) “stringReceived”. The core, on the other hand, implements a “UI listener” interface which contains methods like “stringEntered”.
This way the UI is completely decoupled from the business logic. By implementing the appropriate interfaces you can even introduce a network layer between your core and your UI.
[Edit] In the starter class for my applications there is almost always this kind of code:
Core core = new Core(); /* Core implements GUIListener */
GUI gui = new GUI(); /* GUI implements CoreListener */
core.addCoreListener(gui);
gui.addGUIListener(core);
[/Edit]
You can decouple ANY GUI and communicate easily with messages using templatious virtual packs. Check out this project also.
In my opinion, the CLI should be independent of the GUI. In an MVC architecture, it should play the role of the model.
I would add a controller which manages both EntryField and CLI: each time EntryField changes, the CLI gets invoked, with all of this managed by the controller.
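A rough sketch of the controller idea (here wiring EntryField straight to the NetworkInterface from the question; the names are illustrative, and the point is only that the two classes never see each other - the controller does the wiring):
#include <sigc++/sigc++.h>
#include <string>
// The view: only emits a signal when the user enters a message.
class EntryField {
public:
    sigc::signal<void, std::string> msgEntered;
};
// The network side: knows nothing about the GUI.
class NetworkInterface {
public:
    void sendMSG(const std::string& msg) { /* send msg over the network */ }
};
// The controller owns the wiring between the two.
class Controller {
public:
    Controller(EntryField& entry, NetworkInterface& net) {
        connection_ = entry.msgEntered.connect(
            sigc::mem_fun(net, &NetworkInterface::sendMSG));
    }
    ~Controller() { connection_.disconnect(); }
private:
    sigc::connection connection_;
};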