I am writing a program to interact with a network-based API using Qt.
The interaction with the API is made with XML messages (both queries and results)
I implemented the communication and data processing in a class in a shared library project and I have a QMainWindow in which the user can enter the connection details. When clicking on the connect button, the following should happen:
1. An instance of the connecting class is created.
2. A connection message is sent to the API to get the session ID. The answer is parsed and a session ID is stored in the class instance.
3. A message is sent to the API to get some field information. The XML answer is then parsed to extract the field definitions needed to request data from the API.
4. Another message is sent to get the data matching the fields. The XML answer is then parsed and stored in a data structure for processing.
5. The data is then processed and finally displayed to the user.
I made a simple console program to test the library and it is working fine - no message is sent before all the data from the previous message has been processed. However, when I implement the same process in a QMainWindow instance, no wait occurs and messages are sent one after another without waiting.
How can I block the GUI thread to wait for full processing before sending the next message?
Thanks
Blocking the UI isn't achieved by blocking the event loop. It's done by disabling the widgets you don't want to allow interaction with - either by literally calling setEnabled(false) on them, or by guarding the interaction with some state variable, e.g.:
connect(button, &QPushButton::clicked, button, [this]{
    if (!hasAllData) return;
    // react to a button press
});
All you need is to define a set of states your application can be in, and disable relevant widgets in appropriate states. I presume that once the session is established, it'd be fastest to issue all queries in parallel anyway, and asynchronously update the UI as the replies come back in real time.
Related
I have created a simple C++ TCP Server application.
Client connects and receives back as a simple echo everything that the client sends to the server. No purpose at all except for me to test the communication.
So far so good. My next task is to decide how to notify the server that a specific event has occurred.
Some event examples:
Player wrote a message - the server accepts the data sent from the client, recognizes that it's a chat message, and sends data back to all connected clients that there is a new message. Clients recognize that a new message is incoming.
Player is casting spell.
Player has died
Many more examples but you get the main idea.
I was thinking of sending all the data in JSON format, where every message will contain an identifier like
0x01 is message event.
0x02 is casting spell event.
0x03 is player dead event.
Once the identifier is sent, the server can recognize which event the client is asking about or informing it of, and can apply the needed logic behind it.
My question is: isn't there a better approach to identify which event the server is being notified of?
I am in a search of better approach before I take this road.
You can take a look at the ISO 8583 standard message format; it's a financial message format, but every message has a processing code that determines what action should be taken for each incoming message.
I am confused about which kind of communication fits the given scenario. I feel that every single list item could be synchronous communication.
Order service calling the shipping service to proceed for shipment.
User buying items from the User Interface (UI) Service, resulting in invocation of the Order Service.
User Interface (UI) Service calling the Catalog Service to get information about all of the items that it needs to render.
All three examples would be considered asynchronous, as they prompt a response through cause and effect - call and respond. While all three could happen concurrently, none of them is synchronous in itself.
Synchronous communication happens simultaneously, like two people editing the same document online. Each editor reads and writes at the same time, but does not interrupt the other in any way.
The best example of synchronous communication is a telephone conversation. All connected parties can hear (receive) and speak (transmit) at the same time, and although humans have difficulty performing both actions simultaneously, the telephone connection itself has no trouble providing both concurrently.
Asynchronous acts like a two-way radio. You must stop transmitting in order to receive.
Synchronous = in sync
The sender waits for a response from the receiver before continuing.
Both sender and receiver must be in an active state.
The sender sends data to the receiver because it requires an immediate response to continue processing.
When you execute something synchronously, you wait for it to finish before moving on to another task.
Asynchronous = out of sync
The sender does not wait for a response from the receiver.
The receiver can be inactive.
Once the receiver is active, it will receive and process the data.
The sender puts data in a message queue and does not require an immediate response to continue processing.
When you execute something asynchronously, you can move on to another task before it finishes.
In your case,
Catalog Service <-- UI --> Order Service --> Shipment service
1) The UI has to fetch item details from the Catalog Service (synchronous, because it needs the items immediately).
2) Once all items are selected, the UI has to invoke the Order Service (synchronous or asynchronous, depending on the user action).
The user might add items to the shopping cart for future use, or to favourites, or process the order immediately.
3) Once all items are in the shopping cart collection, it has to invoke the Shipment Service (asynchronous).
Payment should be synchronous: you need an acknowledgement.
Assuming the payment and other steps are done, it calls the shipment delivery service.
Delivery is asynchronous because it can't be acknowledged immediately; it may take a delay of two days or more.
We have a remote event receiver associated with a list and hooked on all events there. When you update any list item using the OOB SharePoint page, the event receiver is executed; a web service which takes care of the afterward actions works nicely. However, when you update an item using CSOM code, e.g. in a simple console application, nothing happens: the event receiver is not called at all. I found this issue on both SP 2013 and SP 2016.
I will not post any code, as it is irrelevant: the item is updated using the standard approach and the values actually change in the list item; only the event receiver is not fired. To put it simply:
item updated manually from site -> event receiver fired
item updated via CSOM -> event receiver not fired.
I remember a similar issue on SharePoint 2010 when using server-side code and the system account. Could it be that, behind the scenes, the web service called by CSOM (e.g. list.asmx) is using the system account to make changes as well? It's just a hypothesis...
So after deeper investigation and much trial and error, we found out it was indeed an issue with the code in our event receiver. For some strange reason the original developers were checking the Title field in the after properties and aborting if it was not present. I guess it was probably an attempt to prevent looping calls.
One lesson learned: when using CSOM, the after-event properties contain only those fields which were altered by the CSOM code. Keep this in mind in case you need values other than those you are updating - you may need to redundantly copy and assign them again just because of this.
I have a situation where a user can modify an Excel-like grid schedule online. I would like to show a message stating that the schedule is being modified in another window. However, how can I do this with SignalR both when that other window is still open and when the previous window was closed? For other sessions I just want to state that another user is modifying the schedule.
It might be easiest to use localStorage to communicate between multiple tabs/windows a single user has open: http://www.codediesel.com/javascript/sharing-messages-and-data-across-windows-using-localstorage/
You can listen for the storage event which is triggered on every window a user has open on your site when you call localStorage.setItem or localStorage.removeItem.
Of course, it would still make sense to use SignalR to notify other users.
If you cannot use the localStorage API for some reason, you can still use SignalR to send a message to every window the user has open by using Clients.User(userName).... inside your Hub. By default, userName should match your user's IPrincipal.Identity.Name, but you can register your own IUserIdProvider to customize this: http://www.asp.net/signalr/overview/signalr-20/hubs-api/mapping-users-to-connections#IUserIdProvider
In our web application, each HTTP request triggers a lot of computation on the back end. Output can take anywhere from 10 seconds to 1 hour. While it is being computed, "Waiting..." is shown on the website for the respective user.
But it can happen that a user abandons the request partway through. So what can be done on the back end so that the computation can be stopped midway to save resources? What different tactics can be applied here?
And preferably, instead of killing the thread directly, a graceful termination policy would work wonders.
I'm not sure if this fits your scenario but here is how I have tackled this issue in the past. We were generating pdf reports for a web-app. Most reports could be generated in under 5 seconds but some would take up to an hour.
When the User clicks on generate button we redirect them to a "Generating..." dialog screen which has a sort of progress bar and a Cancel button. This also launches the generate process on the server in a separate thread (we have a worker pool). The browser then polls the server regularly via ajax to check on the progress (either update the progress bar or redirect to the display page when finished).
The synchronization at the server between the generating process and the ajax process was done via a process synchronization object. The sync-obj was a very simple class instance which could be retrieved quickly from any thread at any time via some unique string.
Both processes could update this shared sync-obj. As the report generated, the repgen thread would update the sync-obj, and the ajax thread would pass the progress on to the browser. If the User clicked the Cancel button, the ajax thread would set the "cancel" flag in the sync-obj, and the repgen thread would pick that up and break out of the generate loop.
Clearly the responsiveness of the whole process depends a lot on how frequently the repgen thread checks the sync-obj and that often comes down to how the individual report was coded.
Finally, to answer your question: if the User gets bored, goes "back", and clicks the generate button again, we do not cancel the first report and start a second, but rather recognise that it is the same report (and the same sync-obj id) and just let it continue. However, if that does not suit your scenario, then starting a generate process could cancel the first one in the same manner the User can via the Cancel button.