How to force all users' browsers to refresh for a software update

I have a number of web applications that run for a number of businesses, day in and day out.
The applications are built with PHP/MySQL/JS and run on a remote Apache server.
For many years, I have performed updates at late night when the software is not in use.
I would like to be able to perform updates to the software during working hours, if possible.
I have many times asked my clients to make sure they shut the software down at night and close their browsers, but I can never guarantee that they have done so.
I have a refresh timer in the JS that triggers a browser refresh at 11:59, if the browser is still open.
But I would like to be able to trigger this refresh in any open browser, whenever I want.
I have mulled over a few ways to do this, including cron jobs and database values that can be read and reset, but I wonder whether anyone has had success with achieving this?

You want to refresh all open browser tabs that are pointing at your xAMP-ish applications. A few questions:
Does the refresh need to be immediate, or can it be deferred? That is, do everyone's tabs need to be refreshed at the same time, regardless of user interaction, or is it acceptable to wait until the next request from each client, whenever that may be?
Can you schedule the refresh ahead of time (say, with at least 1 session-timeout interval lead-up time), or do you need a method that triggers refreshes immediately?
If you require immediate refreshes, with no ahead-of-time scheduling, you are out of luck. The only way to do this is to keep an open channel for asynchronous updates from the server to the clients, which is hard to do with plain Apache/PHP (see comet, websockets).
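If you did eventually set up such a channel (say, Server-Sent Events from a small side service), the client side would be the easy part. A minimal sketch, where the /events URL and the "reload" event name are assumptions rather than anything your stack already provides:
// Listen on a hypothetical SSE endpoint and reload when the server pushes a "reload" event.
const source = new EventSource("/events");
source.addEventListener("reload", () => {
  window.location.reload();
});
source.onerror = () => {
  // EventSource reconnects automatically; this is just for visibility.
  console.warn("Update channel lost; the browser will retry.");
};
The hard part remains the server end of that channel, which is what Comet/WebSockets address.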
If you can make do with deferred refreshes (waiting until a user submits a request), you have several alternatives. For example, you can
expire all sessions (by calling a script that removes all the corresponding server-side session files, found in /var/lib/php/sessions/ on Linux). Note that your users will not appreciate losing, say, their shopping-cart contents.
use JavaScript to check a client-side version value (loaded at login-time, and kept in localStorage or similar) against incoming replies from the server (which would load it from a configuration file or a DB request). If the server-side value has changed, save whatever can be saved to localStorage (to avoid the previous scenario), inform the user, and refresh the page.
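A minimal sketch of that version check, assuming the server adds its current version to every reply in a response header; the header name, storage key, and helper below are made up for illustration:
// Wrap fetch so every reply is checked against the version stored at login.
const APP_VERSION_KEY = "appVersion";

async function fetchWithVersionCheck(input: RequestInfo, init?: RequestInit): Promise<Response> {
  const response = await fetch(input, init);
  const serverVersion = response.headers.get("X-App-Version");   // hypothetical header
  const knownVersion = localStorage.getItem(APP_VERSION_KEY);
  if (serverVersion && knownVersion && serverVersion !== knownVersion) {
    saveWhatCanBeSaved();                                         // e.g. form state to localStorage
    localStorage.setItem(APP_VERSION_KEY, serverVersion);
    alert("The application has been updated and will now reload.");
    window.location.reload();
  }
  return response;
}

function saveWhatCanBeSaved(): void {
  // Placeholder: stash unsaved form values in localStorage so they survive the refresh.
  document.querySelectorAll<HTMLInputElement>("input[name]").forEach((el) => {
    localStorage.setItem(`pending:${el.name}`, el.value);
  });
}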
Alternatively, if you can schedule the refreshes with enough forewarning, you can include instructions in server replies that will invoke the refresh mechanism when needed. For example, such replies could change your current "reset at 11:59:59" code to read "reset at $requested_reset_time".

As I understand the problem, you want control over when the user sees 'fresh' content and when the cached version is okay. If this is right, add the following to your head content:
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
With these headers in place, the user's browser will not serve the page from its cache, so it fetches fresh content on the next load. You can flip the lines above on and off to suit your needs. This might not be the most sophisticated way of achieving the desired functionality, but it is worth trying.

There are a lot of things to consider before doing something like this. For example, if someone is actively working on a page, maybe filling out a form or something, and you were able to refresh their window, that could create a negative user experience. I believe some of the other answers here have addressed other concerns as well.
That said, I know from working with the LaunchDarkly feature-flag service that it can be done. I don't understand all the inner workings, unfortunately, but my understanding is that the service uses observables to watch for updates. Observables are similar to promises, except that they continuously watch for new changes to their target. You could then force a page reload (or perhaps show an alert prompting the user to reload) when the target updates.
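For illustration, with LaunchDarkly's JavaScript client SDK the watching looks roughly like the sketch below; the client-side ID, user key, and the "ui-version" flag are placeholders, and the exact initialization may differ by SDK version:
import { initialize } from "launchdarkly-js-client-sdk";

const client = initialize("YOUR-CLIENT-SIDE-ID", { kind: "user", key: "user-123" });

client.on("ready", () => {
  // Fires whenever the "ui-version" flag is changed in LaunchDarkly.
  client.on("change:ui-version", (current, previous) => {
    console.log(`UI version changed from ${previous} to ${current}`);
    // Prompt rather than reloading blindly, so users mid-form aren't interrupted.
    if (window.confirm("A new version is available. Reload now?")) {
      window.location.reload();
    }
  });
});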

Related

AppFabric Syncing Local Caches

We have a very simple AppFabric setup where there are two clients; let's call them Server A and Server B. Server A is also the lead cache host, and both Server A and Server B have a local cache enabled. We'd like to be able to make an update to an item from Server B and have that change propagate to the local cache of Server A within 30 seconds (for example).
As I understand it, there appear to be two different ways of getting changes propagated to the client:
Set a timeout on the client cache to evict items every X seconds. On the next request for the item, the client will get it from the host cache, since the local cache doesn't have the item.
Enable notifications and effectively subscribe to get updates from the cache host
If my requirement is to get updates to all clients within 30 seconds, then setting a timeout of less than 30 seconds on the local cache appears to be the only choice if going with option #1 above. Given the size of the cache, it would be inefficient to evict all of it (99.99% of which probably hasn't changed in the last 30 seconds).
I think what we need to implement is option #2 above, but I'm not sure I understand how this works. I've read all of the msdn documentation (http://msdn.microsoft.com/en-us/library/ee808091.aspx) and have looked at some examples but it is still unclear to me whether it is really necessary to write custom code or if this is only if you want to do extra handling.
So my question is: is it necessary to add code to your existing application if you want updates propagated to all local caches via notifications, or is the callback feature just a bonus way of adding extra handling when a notification is pushed down? Can I just enable notifications and set the appropriate polling interval at the client, and things will just work?
It seems like the default behavior (when Notifications are enabled) should be to pull down fresh items automatically at each polling interval.
I ran some tests and am happy to say that you do NOT need to write any code to ensure that all clients are kept in sync, as long as notifications are enabled for the cache in the cluster configuration.
In the client config you need to set sync="NotificationBased" on the localCache element.
The clientNotification element in the client config (its pollInterval attribute) tells the client how often it should check for new notifications on the server. With a poll interval of 15, every 15 seconds the client will check for notifications and pull down any items that have changed.
I'm guessing the callback logic that you can add to your app is just in case you want to add your own special logic (like emailing the president every time an item changes in the cache).

Sustain an http connection while django processes a big request (20mins+)

I've got a django site that is producing a csv download. The content of the csv is dictated by user defined parameters. It's possible that users will set parameters that require significant thinking time on the server. I need a way of sustaining the http connection so the browser doesn't kick up an error message. I heard that it's possible to send intermittent http headers to do this. Can anyone point me in the right direction to set this up on a django site?
(Unfortunately I'm stuck with the possibility of slow reports; improving my SQL won't mitigate this.)
Don't do it online. Trigger an offline task, use a bit of JavaScript to repeatedly call a view that checks whether the task has finished, and redirect to the finished file when it's ready.
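The bit of JavaScript can be as small as the sketch below; the /reports/<task-id>/status/ view and its {"done": true, "url": "..."} JSON shape are made up for illustration:
// Poll a hypothetical status view every few seconds, then redirect to the finished file.
async function pollForReport(taskId: string): Promise<void> {
  const response = await fetch(`/reports/${taskId}/status/`);
  const status: { done: boolean; url?: string } = await response.json();
  if (status.done && status.url) {
    window.location.href = status.url;              // the CSV is ready; go get it
  } else {
    setTimeout(() => pollForReport(taskId), 5000);  // not ready; ask again in 5 seconds
  }
}
pollForReport("id-returned-when-the-export-was-triggered");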
Instead of blocking the user and their browser for 20 minutes (which is not a good idea), do the time-consuming task in the background. When the task finishes and generates the result, simply notify the user so that they just need to download the ready result.

How to send mass mail in Django and get status for every message?

I'm creating a web app for handling various surveys. An admin can create his own survey and ask users to fill it up. Users are defined by target groups assigned to the survey (so only user in survey's target group can fill the survey).
One of methods to define a target group is a "Token target group". An admin can decide to generate e.g. 25 tokens. After that, the survey can be accessed by anyone who uses a special link (containing the token of course).
So now to the main question:
Every token might have an e-mail address associated with itself. How can I safely send e-mails containing the access link for the survey? I might need to send a few thousand e-mails (max. 10 000 I believe). This is an extreme example and such huge mailings would be needed only occasionally.
But I would also like to be able to keep track of each message's status (was it sent, or was there an error?). I would also like to make sure that the SMTP server doesn't block this mailing. It would also be nice if the application remained responsive :) (the task should run in the background).
What is the best way to handle that problem?
As far as I'm concerned, the standard Django mailing feature won't be much help here. People report that setting up a connection and looping through messages calling send() on them takes forever. It wouldn't run in the background, so I believe this could have a negative impact on the application's responsiveness, right?
I read about django-mailer, but as far as I understood the docs, it doesn't let you keep track of the message status. Or does it?
What are my other options?
Not sure about the rest, but regardless, for backgrounding the task (no matter how you eventually do it) you'll want to look at Celery.
The key here is to reuse the connection and not open a new one for each email. Here is the documentation on the subject.

Django/Postgres performance worsening after repeatedly processing the same query

I am running Django on Apache. I have several client computers which should call urllib2.urlopen() and send over some data which my server will process and immediately send back a reply. However, when testing this I found a very tricky issue. I have one client repeatedly send the same data to be processed. The first time, it takes around ~20 seconds; the second time, about 40 seconds; the third time I get a 504 (Gateway Timeout) error. If I try to send more data, 504 errors randomly pop up. I am pretty sure this is an issue with Postgres, as the function that processes the information makes many database calls, but I do not know why the performance of Postgres would decline so much. I have tried several database optimization tricks, including this one (http://stackoverflow.com/questions/1125504/django-persistent-database-connection), to no avail.
Thanks in advance.
Edit: The requests are not coming in concurrently. They are coming in back to back, and each query involves a lot of SELECTs and JOINs, with a few INSERTs and UPDATEs as well. The Apache error logs show that it is just a simple timeout, where the function that processes the client-posted data takes over 90 seconds.
If it's really Postgres, then you should turn on the logging of slow statements in the Postgres configuration to find out which statement exactly is taking so much time.
This can be done by setting the configuration parameter log_min_duration_statement.
Details are in the manual:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT
You say the function makes "many database calls" so I'd start with a very low number, or even 0 to log the duration of all statements, then you might be able to identify the slow ones.
It could also be a locking issue. Maybe the first call does not end its transaction properly and subsequent calls run into a timeout while waiting for a resource.
You can verify this by checking the system view pg_locks after the first call.
Have you checked the Apache error logs? Have you set Django DEBUG = True or ADMINS = ('email@addr.com',) so you can get a detailed error report about what the actual cause of the issue is? If so, how about pasting some information here.
Why are you certain that it's postgres? Have you done diagnostics to come to that conclusion? If so, please let us know.
Are you running apache with mod_wsgi? How many processes and threads have you allocated to your django application?
Also, 20 seconds to process the first transaction is a huge amount of time. Perhaps you could show us the view code that is causing the timeout. We may be able to help there.
I sincerely doubt that it's going to be postgres alone that is causing the issue. It probably has something to do with application code, or server configuration.

Architecture for robust payment processing

Imagine 3 system components:
1. External ecommerce web service to process credit card transactions
2. Local Database to store processing results
3. Local UI (or win service) to perform payment processing of the customer order document
The external web service is obviously not transactional, so how do we guarantee:
1. results are eventually persisted to the database when received from the web service, even in case the database is not accessible at that moment (network issue, DB timeout)
2. other clients are prevented from processing the customer order while a payment initiated by one client has not yet been successfully persisted to the database (and is waiting in some kind of recovery queue)
The aim is to do the processing with non-transactional system components and guarantee that the transaction won't be repeated by another process in case of failure.
(Please look at it in the context of post-sale payment processing, where multiple operators might attempt manual payment processing; not a web checkout application.)
Ask the payment processor whether they can detect duplicate transactions based on an order ID you supply. Then if you are unable to store the response due to a database failure, you can safely resubmit the request without fear of double-charging (at least one PSP I've used returned the same response/auth code in this scenario, along with a flag to say that this was a duplicate).
Alternatively, just set a flag on your order immediately before attempting payment, and don't attempt payment if the flag was already set. If an error then occurs during payment, you can investigate and fix the data at your leisure.
I'd be reluctant to go down the route of trying to automatically cancel the order and resubmitting, as this just gets confusing (e.g. what if cancelling fails - should you retry or not?). Best to keep the logic simple so when something goes wrong you know exactly where you stand.
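A rough sketch of the set-a-flag-first idea; this is illustrative TypeScript, and the db and gateway objects stand in for your own persistence layer and payment-provider client rather than any real API:
// All names below are made up; the point is the ordering: claim the order first, then charge.
interface PaymentDb {
  tryMarkPaymentInProgress(orderId: string): Promise<boolean>;  // atomic "flag" update
  savePaymentResult(orderId: string, result: unknown): Promise<void>;
  recordPaymentError(orderId: string, error: unknown): Promise<void>;
}
interface PaymentGateway {
  charge(request: { orderId: string; amount: number }): Promise<unknown>;
}

async function processPayment(db: PaymentDb, gateway: PaymentGateway, orderId: string, amount: number): Promise<void> {
  // Refuse to proceed if another operator already claimed this order.
  if (!(await db.tryMarkPaymentInProgress(orderId))) {
    throw new Error(`Payment for order ${orderId} is already in progress`);
  }
  try {
    // Send the order ID so the processor can detect duplicate submissions.
    const result = await gateway.charge({ orderId, amount });
    await db.savePaymentResult(orderId, result);
  } catch (error) {
    // Leave the flag set and record the error for manual investigation; do not retry automatically.
    await db.recordPaymentError(orderId, error);
    throw error;
  }
}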
In any system like this, you need robust error handling and error reporting. This is doubly true when it comes to dealing with payments, where you absolutely do not want to accidentally take someone's money and not deliver the goods.
Because you're outsourcing your payment handling to a 3rd party, you're ultimately very reliant on the gateway having robust error handling and reporting systems.
In general then, you hand off control to the payment gateway and start a task that waits for a response from the gateway, which is either 'payment accepted' or 'payment declined'. When you get that response you move onto the next step in your process and everything is good.
When you don't get a response at all (time out), or the response is invalid, then how you proceed very much depends on the payment gateway:
If the gateway supports it send a 'cancel payment' style request. If the payment cancels successfully then you probably want to send the user to a 'sorry, please try again' style page.
If the gateway doesn't support canceling, or you have no communications to the gateway then you will need to manually (in person, such as telephone) contact the 3rd party to discover what went wrong and how to proceed. To aid this you need to dump as much detail as you have to error logs, such as date/time, customer id, transaction value, product ids etc.
Once you're back on your site (and payment is accepted), you're much more in control of errors; but in brief, if you can't complete the order, you should either dump the details to disk (such as a CSV file for manual handling) or contact the gateway to cancel the payment.
It's also worth having a system in place to track errors as they occur, and if an excessive number occur, then consider what should happen. If it's a high-traffic site, for example, you may want to temporarily prevent further customers from placing orders whilst the issue is investigated.
Distributed messaging.
When your payment gateway returns, submit a message to a durable queue that guarantees a handler will eventually get it and process it. The handler would update the database. Should a failure occur at that point, the handler can leave the message in the queue, repost it to the queue, or post an alternate message.
Should something occur later that invalidates the transaction, another message could be queued to "undo" the change.
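A rough sketch of such a handler; this is illustrative TypeScript rather than any particular messaging product, and the message shape and database call are invented:
// The queue redelivers the message until the handler acknowledges it.
interface PaymentResultMessage {
  orderId: string;
  authCode: string;
  approved: boolean;
}
interface PaymentDb {
  savePaymentResult(orderId: string, message: PaymentResultMessage): Promise<void>;
}

async function handlePaymentResult(db: PaymentDb, message: PaymentResultMessage): Promise<"ack" | "retry"> {
  try {
    await db.savePaymentResult(message.orderId, message);  // update the local database
    return "ack";                                          // success: remove the message from the queue
  } catch {
    // Database unreachable: leave the message queued so it is retried later.
    return "retry";
  }
}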
There's a fair amount of buzz lately about eventual consistency and distributed messaging. NServiceBus is the new component hotness. I suggest looking into this; I know we are.