Can I reload a page if it exceeds the timeout? (iMacros)

I need help with a small macro. If the page fails to load because of a connection loss, I need the macro to reload the page and continue on its course, or failing that, to restart the current iteration. Any ideas? Thanks in advance.
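One approach that may help, if you run the macro through the iMacros scripting interface, is to check the return code of iimPlay() and replay after a refresh whenever it fails; inside the macro itself, SET !ERRORIGNORE YES and a larger SET !TIMEOUT_PAGE can keep a slow page from aborting the run. A minimal sketch, assuming the JavaScript scripting interface of iMacros for Firefox and a placeholder macro file named myTask.iim:

// Minimal retry sketch; "myTask.iim" is a placeholder macro name.
// iimPlay() returns 1 on success and a negative code on errors
// such as a page-load timeout.
var maxRetries = 3;
var ret = -1;
for (var attempt = 1; attempt <= maxRetries && ret < 0; attempt++) {
  ret = iimPlay("myTask.iim");
  if (ret < 0) {
    iimPlay("CODE:REFRESH");   // reload the current page before retrying
  }
}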

Related

Is there a better way to detect data changes posted on an index.html page served by NGINX than refreshing the page every second?

I have made a Flutter app that sends notifications whenever an error occurs on the embedded machine it monitors. I achieve this by posting the error to the hosted webserver's page every time an error occurs, and removing it from the page the moment the error is resolved. Meanwhile, the Flutter app connects to this webserver page via its IP address using an HTTP request and refreshes it every second to see if there is any data. If there is, it sends a notification to the phone and starts comparing the data every second to watch for changes. If the data changes, another notification is sent; otherwise it keeps refreshing and reading the page.
The errors are written to the webserver by a C++ program that opens the index.html file, performs the write, and closes the file each time a new error occurs. The moment the error is resolved, the file is opened again and the record is deleted, leaving index.html blank and ready to receive the next error.
I want to know if there is a better way to achieve this, so that the page only refreshes when new data arrives. I have been told that refreshing the webpage every second puts extra load on the embedded processor hosting it. Any leads will be appreciated.
Thanks for your time ^^.
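For reference, the once-a-second polling described above amounts to roughly the following (sketched in JavaScript rather than Dart; the server address and the notification helper are placeholders):

// Sketch of the described polling loop; 'http://192.168.1.10/' and
// sendNotification() are placeholders, not part of the original app.
var lastData = null;
setInterval(function () {
  fetch('http://192.168.1.10/')
    .then(function (r) { return r.text(); })
    .then(function (data) {
      if (data && data !== lastData) {
        sendNotification(data);   // notify the phone about the new error
      }
      lastData = data;            // remember it to detect the next change
    });
}, 1000);   // this once-a-second request is the load in question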

How to force all users' browsers to refresh for a software update

I have a number of web applications that run for a number of businesses, day in and day out.
The applications are in PHP/MySQL/JS, running on a remote Apache server.
For many years, I have performed updates at late night when the software is not in use.
I would like to be able to perform updates to the software during working hours, if possible.
I have asked my clients many times to make sure they shut the software down at night and close their browsers, but I can never guarantee that they have done so.
I have a refresh timer in the JS that triggers a browser refresh at 11:59, provided the browser is still open.
But I would like to be able to trigger this refresh in any open browser, whenever I want.
I have mulled over a few ways to do this, including cron jobs and database values that can be read and reset, but I wonder: has anyone had success with achieving this?
You want to refresh all open browser tabs that are pointing at your xAMP-ish applications. A few questions:
Does the refresh need to be immediate, or can it be deferred? That is, do everyone's tabs need to be refreshed at the same time, regardless of user interaction, or is it acceptable to wait until the next request from each client, whenever that may be?
Can you schedule the refresh ahead of time (say, with at least 1 session-timeout interval lead-up time), or do you need a method that triggers refreshes immediately?
If you require immediate refreshes, with no ahead-of-time scheduling, you are out of luck. The only way to do this is to keep an open channel for asynchronous updates from the server to the clients, which is hard to do with plain Apache/PHP (see comet, websockets).
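For illustration, the client side of such an open channel could look like the sketch below; the wss:// endpoint and the "reload" message format are assumptions, and the server half is the hard part with plain Apache/PHP:

// Minimal push-channel client sketch; the URL and message format
// are assumptions, not an existing API.
var socket = new WebSocket('wss://example.com/app-updates');
socket.onmessage = function (event) {
  if (event.data === 'reload') {
    location.reload();   // the server announced a new deployment
  }
};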
If you can make do with deferred refreshes (waiting until a user submits a request), you have several alternatives. For example, you can
expire all sessions (by calling a script that removes all the corresponding server-side session files, found in /var/lib/php/sessions/ on Linux). Note that your users will not appreciate losing, say, their shopping-cart contents.
use JavaScript to check a client-side version value (loaded at login time and kept in localStorage or similar) against incoming replies from the server (which would read it from a configuration file or a DB request). If the server-side value has changed, save whatever can be saved to localStorage (to avoid the previous scenario), inform the user, and refresh the page; see the sketch below.
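A minimal sketch of that version check, assuming the server attaches a hypothetical X-App-Version header to each reply:

// Client-side version check; the X-App-Version header name is an
// assumption; any server-supplied version value would work.
function checkVersion(response) {
  var serverVersion = response.headers.get('X-App-Version');
  var knownVersion = localStorage.getItem('appVersion');
  if (knownVersion === null) {
    localStorage.setItem('appVersion', serverVersion);   // first load
  } else if (serverVersion !== knownVersion) {
    // save whatever form state can be saved to localStorage here,
    // then inform the user and refresh
    localStorage.setItem('appVersion', serverVersion);
    alert('The application has been updated and will now reload.');
    location.reload();
  }
  return response;
}

// Usage: run the check on every reply from the server.
fetch('/some/endpoint').then(checkVersion);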
Alternatively, if you can schedule the refreshes with enough forewarning, you can include instructions in server replies that will invoke the refresh mechanism when needed. For example, such replies could change your current "reset at 11:59:59" code to read "reset at $requested_reset_time".
As I understand the problem, you want control over when the user sees fresh content and when the cached content is okay. If that's right, add the following to your page's <head> content:
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
Upon receiving these headers, the user's browser will fetch fresh content rather than serving from its cache. You can toggle the lines above to suit your needs. This might not be the most sophisticated way of achieving the desired functionality, but it is worth trying.
There are a lot of things to consider before doing something like this. For example, if someone is actively working on a page, perhaps filling out a form, and you refresh their window out from under them, that creates a negative user experience. I believe some of the other answers here address other concerns as well.
That said, I know from working with the LaunchDarkly feature-flag service that it can be done. I don't understand all the inner workings, unfortunately, but my understanding is that the service uses observables to watch for updates. Observables are similar to promises, except that they continuously watch for new changes to their target. You could then force a page reload (or perhaps show the user an alert prompting one) when the target updates; a rough sketch of the idea follows.
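As a generic sketch of that idea (this is not LaunchDarkly's actual API, and a real service pushes changes rather than polling for them):

// Generic 'watch a value, react on change' sketch; not LaunchDarkly's API.
// The /version.txt endpoint is hypothetical.
function watchVersion(url, onChange, intervalMs) {
  var last = null;
  setInterval(function () {
    fetch(url)
      .then(function (r) { return r.text(); })
      .then(function (current) {
        if (last !== null && current !== last) {
          onChange(current);   // the watched value changed
        }
        last = current;
      });
  }, intervalMs);
}

// Usage: prompt the user rather than force-reloading mid-form.
watchVersion('/version.txt', function () {
  if (confirm('An update is available. Reload now?')) {
    location.reload();
  }
}, 30000);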

ColdFusion scheduled task not working in CF10 on AWS [duplicate]

I have a series of scheduled tasks that all run at various times of the day. Since the migration from ColdFusion version 7 to 10, these tasks have stopped running.
When I check the box that outputs the results to a file, I get a text file that says nothing more than "Connection Failure". I have tried everything imaginable regarding the username and password for the task; it makes no difference. When I run the CFM page in my browser, the page works correctly and generates an email just like it should. I just can't make it run as a scheduled event.
Does the scheduled task's folder have any session check, or anything similar? In other words, is the folder accessible without logging in? Try removing all the redirect rules for the application; that might work.
For me, the requests were timing out. I increased the timeout in the ColdFusion Administrator, and that solved it. Doing a cfhttp request in a test file and dumping the results helped me troubleshoot it.

Sustain an HTTP connection while Django processes a big request (20 mins+)

I've got a Django site that produces a CSV download. The content of the CSV is dictated by user-defined parameters, and it's possible for users to set parameters that require significant thinking time on the server. I need a way of sustaining the HTTP connection so the browser doesn't throw up an error message. I've heard that it's possible to send intermittent HTTP headers to do this. Can anyone point me in the right direction for setting this up on a Django site?
(Unfortunately I'm stuck with the possibility of slow reports; improving my SQL won't mitigate this.)
Don't do it online. Trigger an offline task, use a bit of JavaScript to repeatedly call a view that checks whether the task has finished, and redirect to the finished file when it's ready.
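A minimal sketch of the polling side, assuming a hypothetical /reports/<id>/status/ view that returns JSON such as {"done": true, "url": "/media/report.csv"}:

// Client-side polling sketch; the status URL and its JSON shape are
// assumptions about how the Django side might expose the task state.
function pollForReport(statusUrl) {
  var timer = setInterval(function () {
    fetch(statusUrl)
      .then(function (r) { return r.json(); })
      .then(function (status) {
        if (status.done) {
          clearInterval(timer);
          window.location = status.url;   // redirect to the finished CSV
        }
      });
  }, 5000);   // check every 5 seconds
}

pollForReport('/reports/123/status/');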
Instead of blocking the user and their browser for 20 minutes (which is not a good idea), do the time-consuming task in the background. When the task finishes and generates the result, simply notify the user, who then just needs to download the finished file.

Django/Postgres performance worsening after repeatedly processing the same query

I am running Django on Apache. I have several client computers that call urllib2.urlopen() to send over some data, which my server processes before immediately sending back a reply. While testing this, I found a tricky issue: I have one client repeatedly send the same data to be processed. The first time, it takes around 20 seconds; the second time, about 40 seconds; the third time, I get a 504 (gateway timeout) error. If I send more data, further 504 errors pop up at random. I am fairly sure this is an issue with Postgres, as the function that processes the information makes many database calls, but I do not know why Postgres's performance would decline so much. I have tried several database optimization tricks, including this one (http://stackoverflow.com/questions/1125504/django-persistent-database-connection), to no avail.
Thanks in advance.
Edit: The requests are not coming in concurrently; they come in back to back. Each query involves a lot of SELECTs and JOINs, and there are a few INSERTs and UPDATEs as well. The Apache error logs show that it is just a simple timeout, where the function that processes the client-posted data takes over 90 seconds.
If it's really Postgres, then you should turn on the logging of slow statements in the Postgres configuration to find out which statement exactly is taking so much time.
This can be done by setting the configuration property log_min_duration_statement.
Details are in the manual:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT
You say the function makes "many database calls", so I'd start with a very low threshold, or even 0 to log the duration of every statement; then you should be able to identify the slow ones.
It could also be a locking issue. Maybe the first call does not end its transaction properly, and subsequent calls run into a timeout while waiting for a resource.
You can verify this by checking the system view pg_locks after the first call.
Have you checked the Apache error logs? Have you set Django's DEBUG = True or ADMINS = ('email@addr.com',) so you can get a detailed error report about what the actual cause of the issue is? If so, how about pasting some of that information here?
Why are you certain that it's Postgres? Have you done diagnostics to come to that conclusion? If so, please let us know.
Are you running Apache with mod_wsgi? How many processes and threads have you allocated to your Django application?
Also, 20 seconds to process the first transaction is a huge amount of time. Perhaps you could show us the view code that is causing the timeout. We may be able to help there.
I sincerely doubt that Postgres alone is causing the issue. It probably has something to do with the application code or the server configuration.