CloudFront versioning and invalidation when you cannot change the link

I'm using CloudFront to embed some JavaScript on another person's website. The other person installs the JavaScript by placing the link on their website. They use the link rather than the JavaScript itself so that I can push updates to them without them touching anything.
When I push these changes to CloudFront it takes a while for them to propagate, which isn't desirable. What's the best method for changing the file in this case?
Here are the methods I tried:
1. Let the changes propagate on their own. This is bad because it can take up to 24 hours (the default cache TTL).
2. Invalidate the cache using the API or the console. This is okay, but it still takes 10-30 minutes.
3. Version the files (file_v1.js) and have the other person change the link manually.
Is there another solution?

There is no better option here. If changing the link is an option for you, and the other person's site is NOT itself served through any CDN, versioned filenames (option 3) are the fastest way: a new filename is a new object, so there is nothing cached to wait out.
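If you script your deploys, option 2 can at least be automated. A minimal sketch using boto3 (the distribution ID and path are placeholders; the 10-30 minute wait still applies):

    import time

    import boto3

    client = boto3.client("cloudfront")
    client.create_invalidation(
        DistributionId="E1EXAMPLE23456",  # placeholder: your distribution's ID
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/embed/widget.js"]},  # placeholder path
            # CallerReference must be unique per request; a timestamp works
            "CallerReference": str(time.time()),
        },
    )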


Access/substitute createPage outside of gatsby-node.js

I am creating a forum using Gatsby.
I have developed a form that users can use to create threads to add to the forum, in a page called create.js, which sends the data to an external DB.
Once the user has submitted the thread, I want to create a new page using a template. Normally I would do this in gatsby-node.js, but according to the Gatsby docs, gatsby-node.js is only run once, on deployment.
Is there another way that I can call createPage() outside of gatsby-node.js, or is there another function I am missing?
Ultimately I want the new page to be available in the Gatsby application, without redeploying, after the user has created the necessary content.
The way Gatsby works is that all pages need to be generated at build time. You cannot add new pages without triggering a new build.
Gatsby is not a suitable platform for a forum since content changes hundreds or thousands of times a day. Gatsby is intended for content that changes infrequently such as blogs (which might update a few times a day).
In order to generate pages without using createPage in gatsby-node.js, Gatsby advises using @reach/router and matchPath to extend the client application's router; this functionality is called client-only routes.
There is more info in the Gatsby docs on client-only routes.

How to use React to build websites without react-router?

I'm building a website with Django and React. Since Django itself has a routing system that I don't want to discard, I decided not to use any JavaScript routing library.
I'm using webpack to bundle my files, but since I'm not using react-router there are a lot of webpack entry points and a lot of bundled files (almost one per page), and I'm not sure whether this is the 'correct' way.
And since there is one JavaScript file per page, state and other things are not shared between pages; every page is independent of the others. Can I have some 'shared' things without using react-router?
I know Facebook itself and Airbnb don't use react-router either, so how do they use React? How do they handle that many bundled files?
Can anyone who works for a company that doesn't use react-router share your company's solution?
The proper way to accomplish this is the same way you would with vanilla JavaScript or a library like jQuery.
You don't manage long-lived state in the front end; you grab the state server-side, put it in the HTML, and then use it from JavaScript.
React isn't different, and if you're using Redux you can put that data directly into the initial state of every page/section of your whole site.
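A minimal sketch of that hand-off in Django (the view, template name, and state shape are all hypothetical):

    import json

    from django.shortcuts import render

    def product_page(request):
        # Whatever this page's bundle needs as its initial/preloaded state
        initial_state = {
            "user": {"name": request.user.username},
            "section": "products",
        }
        return render(request, "product.html", {
            "initial_state_json": json.dumps(initial_state),
        })

The template then drops it into a global with something like <script>window.__INITIAL_STATE__ = {{ initial_state_json|safe }};</script>, and each page's entry file reads window.__INITIAL_STATE__ (for Redux, pass it to createStore as the preloaded state). Newer Django versions also ship a json_script template filter that handles the escaping for you.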
Yes. But in order to share things in the way you imagine, you do have to make your app single-page. That is, you don't link around to different URLs; any view change would change at most the #anchor part of the URL, but needn't touch the URL at all - just use JavaScript logic to change which component(s) get rendered. As long as the base part of your URL stays on the same page, your shared objects will stick around.

Uploading multiple files in Liferay in an application with two nodes

I'm using Liferay 6.1.0 GA1.
My application runs on two Tomcats, with Varnish in front of them. Varnish routes each request to a particular node based on a cookie set for it.
When I try to upload multiple files in Firefox, the browser loses this cookie (in Chrome it works just fine).
My idea was to extend the URL - to add a parameter that can later be filtered on in Varnish. But I cannot find where I should add this so that the Flash uploader copies it properly.
Any other ideas would be welcome as well.
"Loosing a cookie" means that it explicitly is set to another value, or the hostname changes. I suggest you use Firebug or the built-in Developer tools (hit F12) and monitor the requests and responses that go through the line. Pay attention to Set-Cookie directives in the response headers as well as the Host directive in the request headers. This should give some hints where they're going.
It's hard to give more specific advice with the level of detail you provide.
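If you want to watch the same thing outside the browser, here is a hedged sketch with Python's requests library (the URLs are placeholders) that shows which response sets or changes the cookie and whether the hostname moves around via redirects:

    import requests

    session = requests.Session()

    # Placeholder URLs: the page that sets the affinity cookie, then the upload page
    for url in ("https://example.com/portal", "https://example.com/upload"):
        response = session.get(url)
        print(url)
        print("  final URL: ", response.url)  # hostname changes show up here
        print("  Set-Cookie:", response.headers.get("Set-Cookie"))
        print("  cookie jar:", session.cookies.get_dict())

This won't reproduce the Flash uploader's behaviour, but it confirms what the servers themselves are sending.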

Serving static files with logic in Django (keeping a download count)

I have a site which enables users to download certain files. However, I want to keep a download count for each file, so the usual approach - putting the static files on a different subdomain and letting Apache do the heavy lifting - is out, and HttpResponseRedirecting the user to that subdomain isn't good either, because the user then sees the real download URL and can fetch the file without incrementing the count. I could build a view which serve()s the file, but I am worried about the "big fat disclaimer" in the docs about doing that in production. How would you/did you implement this? I am quite sure I am not the only one with this problem.
About the platform: I am using Apache and mod_wsgi.
Thank you
We've implemented a system where we needed to control download access to (largish) static files, naturally not wanting Django to serve them itself. We came up with a scheme whereby the Django app, after validating that the user was allowed to download the file (or incrementing a counter, in your case), would create a randomly-named symlink to the file in a directory Apache had access to (be careful: make sure directory indexing is off, etc.), and then redirect the user to that symlink to be served by Apache.
We have a "cleanup" cronjob that removes symlinks a minute after they're created, so if the user wants to download the file again, they have to go through Django and be counted again. Theoretically they could download it more than once within that minute, but how likely is that? You could also clean up more often than every minute: Apache only needs the symlink to exist at the beginning of the download, not throughout the whole transfer.
I'd be curious to know how others address this problem, as I agree with the OP that it is a common scenario.
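A hedged sketch of that scheme as a Django view (the paths, URL prefix, and counter helper are all hypothetical, and a real view must validate filename against path traversal):

    import os
    import uuid

    from django.shortcuts import redirect

    PROTECTED_ROOT = "/srv/files/protected"  # real files; not reachable via Apache
    SYMLINK_ROOT = "/srv/files/links"        # served by Apache, directory indexing off
    SYMLINK_URL_PREFIX = "/links/"

    def download(request, filename):
        increment_download_count(filename)   # hypothetical counter helper

        # Random name so the real location is never exposed; the cron job
        # deletes everything in SYMLINK_ROOT shortly after creation
        link_name = uuid.uuid4().hex + "-" + filename
        os.symlink(
            os.path.join(PROTECTED_ROOT, filename),
            os.path.join(SYMLINK_ROOT, link_name),
        )
        return redirect(SYMLINK_URL_PREFIX + link_name)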
psj's answer is definitely one viable option. Another option you should investigate is putting a reverse-proxy server in front of Apache, like Perlbal, which supports the "X-REPROXY-URL" header.
Once the reverse proxy is in place, instead of sending the user a redirect, you send a response with the "X-REPROXY-URL" header set to a URL the proxy server can access but the user can't. The proxy reads the file from that location and serves it to the client. It does so efficiently, and since all your Django app server has to send is a response with one header set, it is immediately free to handle the next request.
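In Django terms the hand-off is just one header on an otherwise empty response; a hedged sketch (the internal URL and counter helper are hypothetical):

    from django.http import HttpResponse

    def download(request, filename):
        increment_download_count(filename)  # hypothetical counter helper
        response = HttpResponse()
        # Perlbal fetches this internal-only URL and streams it to the client
        response["X-REPROXY-URL"] = "http://internal-static/" + filename
        return response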
The easiest way to do this is to use Apache's X-Sendfile header (via mod_xsendfile). Just set the header's value to the file path and Apache will send the file for you. This blog post has some more details: http://francoisgaudin.com/2011/03/13/serving-static-files-with-apache-while-controlling-access-with-django/ .
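A hedged sketch of the X-Sendfile variant (assumes mod_xsendfile is installed with XSendFile on and an XSendFilePath covering the directory; the paths and counter helper are hypothetical):

    import os

    from django.http import HttpResponse

    def download(request, filename):
        increment_download_count(filename)  # hypothetical counter helper
        response = HttpResponse()
        # Apache intercepts the header and serves the file itself
        response["X-Sendfile"] = os.path.join("/srv/files/protected", filename)
        response["Content-Disposition"] = 'attachment; filename="%s"' % filename
        return response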
I did this with django-counter not too long ago. It lets you keep track of the counts in the admin.
http://github.com/svetlyak40wt/django-counter/

In Django, can I always force browser and provider caches to load new pages with a global setting?

I have a handful of users on a server. After updating the site, they don't see the new pages. Is there a way to globally force their browsers and providers to display the new page, maybe from settings.py? I see there are decorators that look like they do this at the function level.
It depends on the browser and cache settings.
There may be no way to tell browsers to re-fetch: while pages are cached, browsers are not even talking to the server, so there is nothing you can do there.
A good trick is to set a Vary: Cookie header, so you can always invalidate the cache (by changing a cookie somewhere) in case of need.
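For the "global setting" part of the question, a hedged sketch of a small middleware (hypothetical; add its dotted path to your middleware setting, Django 1.10+ style) that stamps every response using helpers that ship with Django:

    from django.utils.cache import add_never_cache_headers, patch_vary_headers

    class NeverCacheMiddleware:
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            response = self.get_response(request)
            # Tell browsers and intermediaries not to reuse this response
            add_never_cache_headers(response)
            # The Vary: Cookie trick described above
            patch_vary_headers(response, ("Cookie",))
            return response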
One way to force the browser to load a new page rather than the cached version is to change the file name. You could add a date/time to the file name and use a rewrite rule (assuming an Apache web server here) to map it back to the real file.
This site gives a quick explanation: http://www.askapache.com/htaccess/mod_rewrite-fix-for-caching-updated-files.html
and Google will show many more.
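If the site is a Django project, a hedged alternative to hand-rolled rewrite rules is Django's hashed static filenames: ManifestStaticFilesStorage (Django 1.7+) renames each file after its content hash at collectstatic time, so every change produces a brand-new URL and stale cached copies are never served.

    # settings.py; run "manage.py collectstatic" to generate the hashed copies
    STATICFILES_STORAGE = (
        "django.contrib.staticfiles.storage.ManifestStaticFilesStorage"
    )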
You may also have to examine your Cache-Control headers.