I have a Rails (4.1) app running on Heroku with Cloudflare as the CDN. In the error logs from New Relic I see a constant trickle of requests for expired CSS and JS assets, primarily application-<fingerprint>.js and application-<fingerprint>.css (with fingerprints that have expired).
I am wondering about a solution that redirects these requests to the current asset, but I am uncertain if this is a good/safe thing to do.
In my routes I'd add
get "assets/:asset_name" => "assets#show"
and then add an assets_controller.rb with:
class AssetsController < ApplicationController
  skip_before_action :authenticate_user!
  skip_before_action :verify_authenticity_token, :only => [:show]

  def show
    begin
      # Strip the 32-character MD5 fingerprint, then re-append the extension
      asset_name = params[:asset_name].gsub(/-[0-9a-f]{32}$/, "") << ".#{params[:format]}"
      if ["css", "js"].include?(params[:format])
        redirect_to "/assets/" + Rails.application.assets.find_asset(asset_name).digest_path
      else
        return asset_not_found!
      end
    rescue
      return asset_not_found!
    end
  end

  private

  def asset_not_found!
    render :text => "asset #{params[:asset_name]}.#{params[:format]} not found", :status => 404
  end
end
I've tried this out in a staging environment and it works, but I'm not sure if this is the right way.
In particular, the need for skip_before_action :verify_authenticity_token bothers me, but without it requests for .js assets result in an InvalidCrossOriginRequest error.
I only see requests for expired CSS and JS assets, not for any image assets, hence the above check that the request format is either "css" or "js", but maybe that's an unnecessary step.
So my question is: would doing this be bad practice? Is there a better way to handle requests for expired assets?
I consider this a bad practice.
Not particularly the way you implemented it (even though I think there are better ways, using Rack middleware), but more because you should not redirect those expired assets anywhere.
If a user requests a CSS or JS file with a stale fingerprint, they likely have an HTML document that is out of date, and you probably want them to reload what they have: "there is a new version, please reload your site".
Deploying is taken care of on Heroku, since Sprockets keeps up to three versions of the assets for you: https://devcenter.heroku.com/articles/rails-4-asset-pipeline#only-generate-digest-assets
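For completeness, here is a rough sketch of the Rack middleware route mentioned above, in case you decide to redirect anyway. The class name and the fingerprint regexp are illustrative only, and like the controller in the question it assumes the Sprockets environment (Rails.application.assets) is available at runtime:

class StaleAssetRedirect
  FINGERPRINTED = %r{\A/assets/(?<name>.+)-[0-9a-f]{32}\.(?<ext>js|css)\z}

  def initialize(app)
    @app = app
  end

  def call(env)
    path = env['PATH_INFO']
    if (m = path.match(FINGERPRINTED)) && !File.exist?("#{Rails.public_path}#{path}")
      # The requested fingerprint is no longer on disk; look up the current asset.
      current = Rails.application.assets.find_asset("#{m[:name]}.#{m[:ext]}")
      # Guard against redirecting an asset to itself (e.g. when assets are compiled on the fly).
      if current && "/assets/#{current.digest_path}" != path
        return [302, { 'Location' => "/assets/#{current.digest_path}", 'Content-Type' => 'text/plain' }, []]
      end
    end
    @app.call(env)
  end
end

Registered with config.middleware.use StaleAssetRedirect, it never touches a controller, so no CSRF or authentication filters need to be skipped. But the advice above still stands: a plain 404 plus a prompt to reload is usually the better answer.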
I am developing an ExtJS application that uses a Django REST framework service. I am using CORS headers to allow fetching the data from the service (https://github.com/OttoYiu/django-cors-headers).
At some point I want to change the URL of the store, and when I do that I get the following error:
XMLHttpRequest cannot load http://10.98.0.241:8000/reacsearch/as?_dc=1418831884352&page=1&start=0&limit=25. The request was redirected to 'http://10.98.0.241:8000/reacsearch/as/?_dc=1418831884352&page=1&start=0&limit=25', which is disallowed for cross-origin requests that require preflight.
In settings.py I define the following properties for CORS:
CORS_ALLOW_METHODS = (
    'GET',
    'OPTIONS'
)

CORS_ORIGIN_ALLOW_ALL = True
This works fine when I use URLs that list all the elements in my database; however, when I change the store to another URL I get the error above. The link also works fine in the browser.
The store URL is changed this way:
var store = Ext.getStore(storeName);
store.getProxy().setUrl(newURL);
store.load();
The difference between the views is that the two that work in the application are viewsets, while the other is just a generic list:
class Example1viewset(viewsets.ModelViewSet):
    """
    API endpoint that allows metabolites to be viewed.
    """
    queryset = examples1.objects.all()
    serializer_class = Example1Serializer


class Example1SearchList(generics.ListAPIView):
    serializer_class = Example1Serializer

    def get_queryset(self):
        queryset = Example.objects.all()
        if 'attr' in self.kwargs:
            queryset = queryset.filter(Q(attribute1__contains=self.kwargs['attr']) | Q(attribute2__contains=self.kwargs['attr']))
        return queryset
As I mentioned, both examples work fine in the browser (even when accessed from other computers on the network); however, in the application, when changing the URL of the store, I get the CORS error. Does anyone have any idea why this is happening?
Thank you.
Edit:
Just for clarification, the problem is not in changing the URL of the store: I tried setting those URLs as the defaults, and they still do not work when accessed from the application.
My urls.py file:
router = routers.DefaultRouter()
router.register(r'example', views.Example1ViewSet)

# Wire up our API using automatic URL routing.
# Additionally, we include login URLs for the browsable API.
urlpatterns = [
    url(r'^', include(router.urls)),
    url(r'^reacsearch/(?P<attr>.+)/$', Example1SearchList.as_view()),
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
]
Could the problem be related to the fact that I am not adding the search list to the router?
Edit 2:
Problem solved. Since I was trying to fetch data from a different domain, I changed the store's proxy type to jsonp in ExtJS, and I also allowed my REST service to render data as JSONP.
Just a reminder if anyone comes across this same problem: it is necessary to add ?format=jsonp to the store URL:
http://my/url/?format=jsonp
Since it looks like an alternate solution was found, I'll explain what the issue appeared to be as well as why the alternative works.
XMLHttpRequest cannot load first url. The request was redirected to 'second url', which is disallowed for cross-origin requests that require preflight.
The issue here is that you are telling Django to enforce the trailing slash, which makes it automatically redirect URLs without a trailing slash to URLs with a trailing slash, assuming that one exists. This is why, as stated in the error, the request was redirected to the second URL, which you can tell has the missing trailing slash. This is controlled by the APPEND_SLASH Django setting, which is True by default.
The problem is that when CORS is doing a preflight request, which is what allows it to determine if the request can be made, there must be a valid response at the requested URL. Because you are redirecting the request, the preflight request fails and you're stuck without your information.
You can fix this by adding the trailing slash in your code. There appear to be a few solutions for doing this with Ext, but I personally can't recommend a specific one. You can also manually set the URL to use the trailing slash, which sounds like what you were doing previously.
Or you can use JSONP...
You've found the alternative solution, which is to use JSONP to make the request instead of relying on CORS. This gets around the preflight issue and works in all major browsers, but there are some drawbacks to consider. You can find more information on CORS vs JSONP by looking around.
You're going to need CORS if you want to push any changes to your API, as JSONP only supports GET requests. There are other advantages, such as the ability to abort requests, that also come with CORS.
Using Rails 4 and Devise 3, I would like to have different registration pages based on the URL my user is given.
As an example, each of the following should be directed to a different view that acts as devise registration.
www.mydomain.com <-- current root to registrations#new
www.mydomain.com/user_type_1
www.mydomain.com/user_type_2
www.mydomain.com/user_type_3
How would I do this? I can copy app/views/devise/registrations/new.html.erb to capture the form, but how would I make the routing work?
My routes are currently set up as follows (I close each session so the user can sign up a friend, but that is not relevant to this question):
devise_scope :user do
  authenticated :user do
    root :to => 'devise/sessions#destroy', as: :authenticated_root
  end

  unauthenticated :user do
    root :to => 'devise/registrations#new', as: :unauthenticated_root
  end
end
So you want three different URL paths that point to three different views, but you want the forms to all send their info to the same REST endpoint in the same controller (users#create)? That sounds simple. You have GET requests to get the HTML/ERB files for each registration page (welcome#index, welcome#cool, welcome#coolest), and routes for each to send the GET request to the right controller action.
Then you set up the forms to all POST their info to users#create, and one route from there.
Does that make sense?
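A minimal routes sketch of that idea, to sit alongside the devise_scope block above rather than replace it; WelcomeController and its index/cool/coolest actions are the hypothetical names used in this answer, so adapt them to your app:

Rails.application.routes.draw do
  devise_for :users

  # One GET route per registration page; each action just renders its own copy of the form
  get '/user_type_1', to: 'welcome#index'
  get '/user_type_2', to: 'welcome#cool'
  get '/user_type_3', to: 'welcome#coolest'
end

Each form then posts to the same endpoint, e.g. form_for(User.new, as: :user, url: user_registration_path), so Devise's registrations#create (the users#create endpoint mentioned above) handles all three pages.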
I'm implementing Kickstarter's Rack::Attack in my Rails app.
The whitelist/blacklist filtering is working properly, but I'm having issues using Allow2Ban to lock out IP addresses that are hammering my sign_in (Devise) page. Note: I'm testing this locally and have removed localhost from the whitelist.
# Lockout IP addresses that are hammering your login page.
# After 3 requests in 1 minute, block all requests from that IP for 1 hour.
Rack::Attack.blacklist('allow2ban login scrapers') do |req|
  # `filter` returns a false value if the request is to your login page (but
  # still increments the count), so requests below the limit are not blocked
  # until they hit the limit. At that point, filter will return true and block.
  Rack::Attack::Allow2Ban.filter(req.ip, :maxretry => 3, :findtime => 1.minute, :bantime => 1.hour) do
    # The count for the IP is incremented if the return value is truthy.
    req.path == '/sign_in' and req.post?
  end
end
In the Rack::Attack documentation, it clearly states that caching is required for the throttling functionality, i.e.:
Rack::Attack.throttle('req/ip', :limit => 5, :period => 1.second) do |req|
but it doesn't state this for Allow2Ban. Does anyone know if a cache is required for Allow2Ban, or am I implementing it incorrectly with the code above on a Devise sign_in page?
Yes, Allow2Ban and Fail2Ban definitely need caching (in https://github.com/kickstarter/rack-attack/blob/master/lib/rack/attack/fail2ban.rb you can see how and why).
By the way, I suggest using Redis as the cache, because it ensures that your application blocks an IP address even if you are running more than one application node. If you use an in-process Rails cache (e.g. the memory store) across multiple application nodes, your filters will be managed per instance, which is presumably not what you want.
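For reference, a minimal initializer sketch along those lines, assuming the redis-activesupport gem (which provides ActiveSupport::Cache::RedisStore) and a REDIS_URL environment variable:

# config/initializers/rack_attack.rb
# Allow2Ban/Fail2Ban counters go through Rack::Attack.cache, so backing it
# with Redis shares the ban state across every application node.
Rack::Attack.cache.store = ActiveSupport::Cache::RedisStore.new(ENV.fetch('REDIS_URL'))

If Rails.cache is already backed by Redis or Memcached, pointing Rack::Attack at it (Rack::Attack.cache.store = Rails.cache) works just as well; the point is simply that the store has to be shared between nodes.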
I have both a Django app and an AngularJS app hosted at different endpoints. Obviously, in order for XHR requests to work I need to set the CSRF token within Angular, which is easy enough to do when Angular is served by Django, but not so much when it is independent.
Here is my code so far:
angular.module('App', [
  'ngCookies',
])
.run(['$rootScope', '$http', '$cookies',
  function($rootScope, $http, $cookies){
    // Set the CSRF header token to match Django
    $http.defaults.headers.post['X-CSRFToken'] = $cookies['csrftoken'];

    // Bootstrap
    $http.get('http://127.0.0.1:8000/test/').success(function(resp){
      console.log($cookies['csrftoken']);
    });
  }
]);
It seems that $cookies['csrftoken'] is always undefined, and I assume I have to retrieve this somehow but can't find any resources as to how this process works.
Can anyone point me in the right direction?
Cookies are only accessible on the same origin, so accessing from another domain won't share the CSRF token through cookies; you're going to have to find another way to introduce the token (such as with Django's template tag).
Second, your example looks like it's trying to read a cookie from the $http.get() call. The $cookies service collects cookies from when the document is loaded (stored in document.cookie), and cookies resulting from cross-domain Ajax/XHR calls are not accessible.
You can use this:
app = angular.module("App", []);
app.run(function($http) {
  $http.defaults.headers.post['X-CSRFToken'] = $.cookie('csrftoken');
});
where $.cookie comes from jQuery Cookie plugin.
This question flows from the answer to: How does one set up multiple accounts with separate databases for Django on one server?
I haven't seen anything like this on Google or elsewhere (perhaps I have the wrong vocabulary), so I think input could be a valuable addition to the internet discourse.
How could one configure a server like so:
One installation of Lighttpd
Multiple Django projects running as FastCGI
The Django projects may be added/removed at will, and ought not to require restarting the webserver
Transparent redirection of all requests/responses to a particular Django installation depending on the current user
I.e., given Django projects (with corresponding FastCGI sockets):
Bob (/tmp/bob.fcgi)
Sue (/tmp/sue.fcgi)
Joe (/tmp/joe.fcgi)
The Django projects are started with an (oversimplified) script like so:
#!/bin/sh
NAME=bob
SOCKET=/tmp/$NAME.fcgi
PROTO=fcgi
DAEMON=true

/django_projects/$NAME/manage.py runfcgi protocol=$PROTO socket=$SOCKET daemonize=$DAEMON
I want traffic to http://www.example.com/ to be directed to the correct Django application depending on which user is logged in.
In other words, http://www.example.com should effectively "be" /tmp/bob.fcgi if bob is logged in, /tmp/joe.fcgi if joe is logged in, and /tmp/sue.fcgi if sue is logged in. If no one is logged in, it should redirect to a login page.
I've contemplated a demultiplexing "plexer" FastCGI script with the following algorithm:
If the cookie $PLEX is set, pipe request to /tmp/$PLEX.fcgi
Otherwise redirect to login page (which sets the cookie PLEX based on a many-to-one mapping of Username => PLEX)
Of course, as a matter of security, $PLEX should be taint-checked, and $PLEX shouldn't give rise to any presumption of trust.
A Lighttpd configuration would be like so (though Apache, Nginx, etc. could be used just as easily):
fastcgi.server = ( "plexer.fcgi" =>
( "localhost" =>
(
"socket" => "/tmp/plexer.fcgi",
"check-local" => "disable"
)
)
)
Input and thoughts, helpful links, and to know how to properly implement the FastCGI plexer would all be appreciated.
Thank you.
Here's roughly how I solved this:
In lighttpd.conf
$SERVER["socket"] == "localhost:81" {
include_shell "/opt/bin/lighttpd_conf.py"
}
And corresponding lighttpd_conf.py:
#!/usr/bin/python
import fileinput

ACCOUNT_LIST_FILE = "/opt/servers/account_list.txt"

for user in fileinput.input(ACCOUNT_LIST_FILE):
    user = user.strip()  # drop the trailing newline so it doesn't end up in the generated config
    print """
$HTTP["url"] =~ "^/%s/" {
    scgi.server = ( "/" =>
        (
            (
                "socket" => "/tmp/user-socket-%s.scgi",
                "check-local" => "disable",
            )
        )
    )
}
""" % (user, user)
Where ACCOUNT_LIST_FILE contains a number of accounts, e.g.
abc1
abc2
abc3
The server will map http://example.com/abc1 to /tmp/user-socket-abc1.scgi, where presumably a Django instance for user abc1 is talking SCGI.
One must obviously perform some sort of taint checking on the names of accounts (I generate these).