Prerender.io first hit missed - ember.js

I installed prerender.io with Nginx on my Ember.js project. I use the Facebook debugger to check whether the prerender is set up correctly. The problem:
The first hit to the prerender always fails. Unfortunately, Facebook caches this broken version, so it is the one that gets displayed on the site.
When I ask to "Fetch new scrape information", I get a hit and the content is displayed properly.
How can I make the first try a hit?
GUESSES
Maybe there is a problem with window.prerenderReady, which is used in my project (ember-prerender)?
Maybe the Nginx configuration does not wait for the result of the caching, or the caching takes too long?
INFOS
I use Nginx with the standard configuration recommended by prerender.io

Facebook can time out if the response takes longer than 5 seconds. It sounds like your pages take more than 5 seconds when rendered on the fly. The reason it works the second time is that by then the page is cached and returned in under 100 ms.
I'd suggest speeding up your page load time so that pages rendered on the fly are returned more quickly. Send an email to support@prerender.io if you want some help there! We can send you the timings of the requests being made to your URLs.

Related

How to wait long enough for Django views results on Firefox?

I'm developing a website with Django 2.2, using:
Centos 7
Mozilla Firefox 60.6.0
Google Chrome 73.0.3683.86
Docker and Docker compose
The first page allows the user to submit data from a formatted file (equivalent to csv) and the second page shows the result of calculations (done row by row) in a datatable.
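Roughly, the second page's view works like this (a simplified sketch with illustrative names; the real parsing and calculations are more involved):
# views.py -- simplified sketch, names are illustrative
from django.shortcuts import render

def row_result(row):
    # placeholder for the real per-row calculation (the slow part)
    return len(row.split(';'))

def results(request):
    uploaded = request.FILES['data_file']             # file submitted on the first page
    rows = uploaded.read().decode().splitlines()      # csv-like file, one record per line
    computed = [row_result(row) for row in rows]      # calculations done row by row
    return render(request, 'results.html', {'rows': computed})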
I have noticed a difference between Mozilla Firefox and Google Chrome:
For big files, on Chrome, the web browser waits long enough to receive and display the results of calculations. Whereas in Firefox, the web browser stops waiting for localhost and the "results" page is not loaded.
As the problem occurred when the file exceeded a certain size, I guessed that the app spent too much time calculating and that Firefox stopped waiting for the response before loading the "results" page.
So I changed my view to speed up the calculations. The problem still persists with big files: from approximately 3.5 MB, the results page is displayed or not almost at random.
I tried to raise "dom.max_script_run_time" in my Firefox settings, but this can't be done programmatically.
I saw that Celery can be used for long-running calculations, but in my case the calculation may run on anywhere from 1 to 3000 rows, and I would like to find a solution without using Celery.
Another solution could be to use JavaScript to set a timeout when Firefox is detected as the browser and then show an error message, but I would also like to avoid that.
I expect my app to work well at least in the Mozilla Firefox, Safari, Opera and Google Chrome web browsers.
Thanks for your help!

Why does Django block simultaneous requests within the same session?

I tried adding sleep(30) as the first line of my view. After that I opened this page in two browser tabs. The first tab loaded the page after 30 seconds, and the second one loaded it after 60 seconds. In the meantime I was able to open pages from another PC just fine. So it looks like Django blocks concurrent requests from the same client.
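For reference, the test view is essentially this (minimal sketch):
import time
from django.http import HttpResponse

def test_view(request):
    time.sleep(30)                # simulate a slow view
    return HttpResponse('done')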
This suits my app very well, and I'd like to be sure my site will keep working this way in the future. However, I have not found any documentation or articles describing this Django behaviour, so I'm still not sure whether it is a feature or just luck. Could somebody please explain how and why this works?
What I actually need is to block the session while a view is processing. Of course I could use some flags or database transactions, but I'd rather not reimplement a feature that Django already provides.
I use Python 2.6.5, Django 1.4, Ubuntu Server, Nginx and uWSGI. I tried both PostgreSQL and SQLite.
My uwsgi settings:
<uwsgi>
    <pythonpath>/home/admin/app/src</pythonpath>
    <app mountpoint="/">
        <script>deploy.wsgi</script>
    </app>
    <workers>4</workers><!-- Not sure this is needed -->
    <processes>2</processes>
</uwsgi>
I also get the same effect with the runserver command.
Actually, Django does not block simultaneous requests.
If I run two browsers (for example Chrome and Firefox) with the same session (by copying the sessionid cookie from the first browser to the second one), the blocking does not happen. So this is a browser feature, and it's not related to Django in any way. It also means I still need to add some blocking feature myself to make the code safe.
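If you do need that blocking, one manual approach is a per-session lock; a rough sketch using the cache framework (illustrative names, not something Django provides out of the box):
from django.core.cache import cache
from django.http import HttpResponse

def my_view(request):
    lock_key = 'view-lock:%s' % request.session.session_key
    if not cache.add(lock_key, '1', 60):        # another request from this session holds the lock
        return HttpResponse('Busy, please retry', status=409)
    try:
        # ... the work that must not run concurrently for this session ...
        return HttpResponse('done')
    finally:
        cache.delete(lock_key)
Here cache.add only sets the key if it does not already exist, so a second request from the same session sees the lock and backs off instead of being queued.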

ColdFusion pages hang, but only after loading content

We have a Windows server running ColdFusion 8. When I load a CF page from that server in a browser, the page content is displayed almost immediately, but the connection does not close. The browser's "page loading" icon keeps spinning for another ten seconds.
I did a test where I created two files: test.cfm and test.html and loaded them side by side on the server. Each file contains only a single line of text: "This is a test." When I load each page in a browser, both pages display the text immediately, but only the CF page keeps "loading" for another ten seconds.
This behavior is making our AJAX-driven pages unusable. What is causing it, and how can I fix it?
There must be some code in there trying to do something, and it may not be in the page itself; it could be in an include or in the application file.
That type of spinning cursor behaviour sounds like some sort of ajax call failing after the initial page load.
One thing that would help is to turn on ColdFusion debugging and post the results here.
ColdFusion and Ajax work very well together out of the box, and ColdFusion has an almost insanely simple page debugging tool that makes it extremely easy to see the execution path of the CFM file and where the page is spending its time.
Have you restarted the CF service? If there are already hung requests or other problems, that will likely be the only way to resolve them.

In Django 1.3 alpha 1, does the built-in web server cache pages (or database results) more aggressively than before?

I’m using Django version 1.3 alpha 1 (SVN-14750) to build a Django site.
I’ve got a couple of pages which display data, and allow me to edit that data. However, I seem to have to restart the built-in Django web server to see the updated data.
I don’t remember having this problem before: usually a CTRL + F5 refresh in the browser would do it. Obviously, it’s quite annoying behaviour during development; seeing up-to-date data load more slowly is preferable to seeing out-of-date data load instantly.
I’m using Firefox with the cache disabled (about:config, network.http.use-cache=False), so I’m reasonably sure the issue is with Django.
Web servers themselves don't do caching; it is up to the application itself to decide how (server-side) caching works. In Django's case, there are a number of options for enabling caching.
At a high level, though, Django sees a request for a URL, generates the HTML string in response, and stores that string in memory (or in a database, depending on the cache backend you set). The next time a request comes through for that same URL, Django checks whether that response lives in the cache and, if it does, returns that string. If it doesn't, the process repeats.
The idea behind providing the vary_on decorators is that you change the lookup keys used to find a response in the cache. If you vary on (user, url), the algorithm goes something like this:
1. request /users/3/Josh
2. key = str(user) + str(url)
3. response = get_from_cache(key)
4. if response is None: response = view_function()
5. save_to_cache(key, response)
6. return response
The web server has no input into this type of caching.
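In actual Django code this maps (roughly) onto the cache and vary decorators. A minimal sketch, assuming the per-view cache plus a cookie-based vary so each user gets their own cached copy:
from django.http import HttpResponse
from django.views.decorators.cache import cache_page
from django.views.decorators.vary import vary_on_cookie

@cache_page(60 * 5)   # cache the rendered response for 5 minutes, keyed on the URL
@vary_on_cookie       # also vary the cache key on cookies, i.e. per user/session
def user_profile(request, username):
    # the expensive work below only runs on a cache miss
    return HttpResponse('profile for %s' % username)
cache_page keys the entry on the URL and vary_on_cookie folds the cookie header into that key, which is what gives the (user, url) behaviour described above.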
The other type of caching is client-side. This is where the web server is configured to return certain headers for specific types of resources, such as static content (JavaScript, images, etc.). The browser can then analyze those headers and decide whether to request the content again or to use the copy stored client-side. This generally doesn't apply to dynamic content, however.
Ah — I still had some cache middleware enabled. Removing the following from my MIDDLEWARE_CLASSES setting in settings.py fixed it.
'django.middleware.cache.UpdateCacheMiddleware',
'django.middleware.cache.FetchFromCacheMiddleware',
(As is probably evident from the question and this answer, I don’t understand caching, Django or otherwise, very well.)
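For anyone else hitting this: those two middleware classes implement Django's per-site cache, which is why every generated page was being served stale. The relevant settings look roughly like this (values illustrative):
# settings.py (illustrative values)
MIDDLEWARE_CLASSES = (
    'django.middleware.cache.UpdateCacheMiddleware',     # must come first
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',  # must come last
)
CACHE_MIDDLEWARE_SECONDS = 600  # every generated page is cached this long, hence the out-of-date data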

Django/IE8 Admin Interface Weirdness

Esteemed Django experts and users:
I have been using Django's admin interface for some data editing needs. I am using it on Windows Server 2008, with django-mssql to connect to a SQL Server backend, on Python 2.6.2 and Django 1.1.0 final.
As per usual w/ Django, this was fairly easy to set up, and works beautifully on Firefox, but using IE8 I intermittently get a puzzling 'Internet Explorer cannot display this webpage' when I save a record.
In the log, it looks like a save typically produces a POST request that returns a 302 status, followed by a GET that returns a lovely 200. This is on Firefox. On IE8, it looks like sometimes the POST works but the GET doesn't.
So that's what I have going on. Any help w/ this will be appreciated. Thank you.
I suspect the bug is IE8 refusing to process the redirect properly.
The 302 response to the POST pushes the browser on to the 200 GET, but if the browser never processes the 302, then Django (or the server) will not log a 200 GET, because the browser never requested the page (the server can only log what is accessed; ergo, the browser is not making the call).
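That 302-then-200 pattern is just the usual post/redirect/get flow the admin uses after a save; a minimal sketch of the idea (illustrative names, not the admin's actual code):
from django.http import HttpResponse, HttpResponseRedirect

def save_record(request):
    if request.method == 'POST':
        # ... persist the record ...
        return HttpResponseRedirect('../')         # 302: the browser is expected to follow with a GET
    return HttpResponse('edit form goes here')     # 200: the page IE8 sometimes never fetches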
If you have Django behind something (IIS using FastCGI, or Apache, or something else), bump up the logging to make sure there's no silent error during rendering. I had the same problem on Vista x64 Ultimate with IE8 Beta 2, but compatibility mode appeared to fix the problem somewhat -- there was still some intermittent refusal to redirect.
I realize this post is a bit old now, but I had the exact same symptoms recently. After a lot of digging around, I found that IE8 has issues accepting cookies with a life of less than 20 minutes.
In our Django project's settings.py we had the property SESSION_COOKIE_AGE set to 10 minutes. Once I bumped it to 20 minutes, IE8 had no problems logging in.
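In settings.py terms the change amounts to this (the value is in seconds):
SESSION_COOKIE_AGE = 20 * 60  # was 10 * 60, which IE8 mishandled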