Is there a way to read session from ResourceHandler like you can do it from ServletContextHandler?
Something like this:
request.getSession(true).setAttribute("test", test)
The javax.servlet.http.HttpSession is only present on contexts that belong to a javax.servlet.ServletContext.
So no, you cannot access it from a ResourceHandler.
Why do you need to do this for a static resource, and cannot just use the more complete static file serving feature-set of the DefaultServlet found within a ServletContextHandler?
And yes, you can have it serve static resources from alternate locations.
Serving static files from alternate path in embedded Jetty
I have a JETTY_HOME directory containing the unpacked Jetty distribution.
I want to disable the DefaultHandler, to achieve three things:
Prevent directory listings which would otherwise be controlled by the dirAllowed init parameter (the DefaultServlet is configured by this parameter).
Prevent web app context listings, which may reveal sensitive information such as server directory paths or other web app contexts running within the same Jetty instance.
Be sure that DefaultHandler doesn't provide access to any sensitive files in the event that I botch a web app deployment. I'm happy to implement my own static file serving servlet where necessary as an alternative to using DefaultHandler.
I could simply edit JETTY_HOME/etc/jetty.xml and remove the DefaultHandler from there. However, JETTY_HOME is supposed to be read-only, and I'm only supposed to make changes in my JETTY_BASE folder. Only modifying JETTY_BASE comes with the advantage of not having to repeatedly modify JETTY_HOME when upgrading to a newer release of Jetty.
How do I make this change from inside JETTY_BASE?
DefaultHandler is necessary, don't remove it.
Let's address each point.
Prevent directory listings which would otherwise be controlled by the dirAllowed init parameter (the DefaultHandler is configured by this parameter).
The DefaultHandler doesn't do directory listings.
That's the role of the DefaultServlet in a WebAppContext, or a ResourceService / ResourceHandler in an embedded scenario.
If you want to prevent directory listings presented by the DefaultServlet in a WebAppContext you need to configure the DefaultServlet.
You can do that with one of the following choices.
Declare a <servlet> entry in your WEB-INF/web.xml that configures the servlet named default with an init-param of dirAllowed set to false.
This is a change in the individual webapp's own WEB-INF/web.xml
Declare a servlet context init-parameter (not servlet specific, whole context), where the key org.eclipse.jetty.servlet.Default.dirAllowed is set to value false.
This is a change in either the individual webapp's own WEB-INF/web.xml or the XML deployable (i.e. ${jetty.base}/webapps/<name>.xml) for each webapp.
Provide an alternate webdefault.xml for the defaultDescriptor that configures the default behavior to be dirAllowed=false.
This is a change either to the individual webapp's XML deployable, to call WebAppContext.setDefaultDescriptor(), or to the overall deployment defaults for the DeploymentManager / AppProvider combination you are using. It will also require a custom ${jetty.base}/etc/<name>.xml, which is your new default descriptor.
Provide a override descriptor xml that can be applied after your defaultDescriptor + webapp descriptor to configure dirAllowed=false.
This is a change either to the individual webapp's XML deployable, to call WebAppContext.setOverrideDescriptor(), or to the overall deployment defaults for the DeploymentManager / AppProvider combination you are using. It will also require a custom ${jetty.base}/etc/<name>.xml, which is your new override descriptor.
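For example, the servlet context init-parameter route (option 2 above) could look roughly like this in WEB-INF/web.xml — a sketch, using the parameter name quoted above:

```xml
<!-- WEB-INF/web.xml sketch: disable directory listings for the whole context -->
<context-param>
  <param-name>org.eclipse.jetty.servlet.Default.dirAllowed</param-name>
  <param-value>false</param-value>
</context-param>
```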
If you are using ResourceService / ResourceHandler in an embedded-jetty scenario, you can just call ResourceService.setDirAllowed(false).
Prevent web app context listings, which may reveal sensitive information such as server directory paths or other web app contexts running within the same Jetty instance.
That's controlled by the showContexts configuration on the DefaultHandler.
There are two ways to control this behavior.
Option A: configure the DefaultHandler
You can add 2 files to your ${jetty.base}
New file: etc/tweak-defaulthandler.xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "https://www.eclipse.org/jetty/configure_10_0.dtd">
<Configure id="DefaultHandler" class="org.eclipse.jetty.server.handler.DefaultHandler">
<Set name="showContexts">false</Set>
</Configure>
Add a line to ${jetty.base}/start.d/tweaks.ini to use this XML
$ cat start.d/tweaks.ini
etc/tweak-defaulthandler.xml
Option B: declare a ROOT context with default behavior.
Create a ${jetty.base}/webapps/ROOT directory.
Add a ${jetty.base}/webapps/ROOT/index.html with whatever content you want.
That will be served instead of DefaultHandler creating the list of contexts.
Be sure that DefaultHandler doesn't provide access to any sensitive files in the event that I botch a web app deployment. I'm happy to implement my own static file serving servlet where necessary as an alternative to using DefaultHandler.
The DefaultHandler only serves 3 things.
/favicon.ico requests (only if a GET request)
/ (show context listing) (only if a GET request and showContexts is true)
404 Errors - all other requests that reach this handler.
You are confusing DefaultServlet / ResourceService / ResourceHandler with DefaultHandler, which is a totally different thing.
I have an application where I want to serve static files to my customers.
To protect these files I'm using AWS cloud-front, and I've setup my distribution to require that you have a signed url to access the files.
However there is 1 file on my CDN I want to make public to everyone, no restricted access required.
I know I could make a second Cloudfront distribution without security, and serve the file through that one. However this would make the client resolve 2 separate (sub)domains.
So ideally I would like all this to work from a single Cloudfront (sub)domain, but I don't know if it's possible.
I looked into signing a URL that lasts forever, but it looks like there are too many things that can "invalidate" the URL before its expiry time, such as the tokens expiring.
You can have a look under the "Behaviors" tab of your cloudfront distribution. There you can specify different actions based on the path that is requested.
So if you want the public path to be at /public, you can add that as a new behavior and, in the same window, set Restrict Viewer Access (Use Signed URLs or Signed Cookies) to No.
There should already be a Default (*) behavior. When the new behavior is added, it is automatically given higher precedence than the default behavior.
I am using Amazon S3 to serve static files for an application hosted on Heroku. I have made the S3 bucket public and enabled static website hosting. The issue is that I don't have an SSL certificate, so I need to access the bucket without HTTPS, but when the static tag creates URLs for my application's static files in templates, the URLs don't resolve the way I expect. How should I set this up so I can access static files on my website without purchasing an SSL certificate?
settings.py
Custom_domain = 'xxx.s3-website-us-west-2-amazonaws.com'
STATIC_URL = "%s/" % Custom_domain
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
The settings are similar for MEDIA_URL and DEFAULT_FILE_STORAGE.
This might help
Django AWS S3 tutorial
You need to give the full url including protocol.
STATIC_URL="http://%s/" % Custom_domain
In fact, it wouldn't work at all without the protocol; the browser would just interpret it as a relative path in the current domain.
Note though that you can easily get a free ssl certificate from Let's Encrypt.
In AWS S3 you can upload a file and make it public, and you get a URL to access it. You can also enable "Static Website Hosting". Can someone clarify the difference between these two approaches? If I can simply upload my HTML pages, make them public, and access them over HTTP through browsers, why do I need to enable static website hosting?
Enabling Static Website Hosting on S3 allows you to use a custom domain name, custom error pages, index.html documents for paths that end in /, and 301 redirects.
For others who are just stumbling across this, one disadvantage of enabling Static Website Hosting is that the endpoint you get is HTTP-only.
See the relevant docs. If you can work within the limitations of simply making the files public, such as no custom domain name, you get TLS for free, which matters since some browsers block HTTP links on pages served over HTTPS.
We want to serve protected media from django, using something similar to the django nginx x-accel-redirect setup.
The only problem is that the static files are not located on the public facing django/nginx machine, but in a internal machine that streams the file via http/rest api.
Currently we download the file on the nginx machine and serve it via nginx x-accel-redirect, but we want to optimize this part and looking for options. x-accel-redirect has known problems with files that are streamed from another source.
We are contemplating using Django itself as a quasi-buffer, but are open to other options, such as integrating something like whizzer/twisted, or maybe even having another service altogether.
What would be the best option for serving those static files and preserving security?
Use: http://www.allbuttonspressed.com/projects/django-filetransfers
Make your own Django storage backend for the internal machine's http/rest api that returns a File object, and pass that object to filetransfers' server_file function.
That's how I do it in Mayan EDMS https://github.com/rosarior/mayan/blob/master/apps/documents/views.py#L300
django-storages' backends could help you get started.
https://bitbucket.org/david/django-storages/wiki/Home
Update:
django-resto appears to have an HTTP-based storage class:
https://github.com/aaugustin/django-resto/blob/master/django_resto/storage.py#L62
I had success doing something similar using django-http-proxy. This assumes that the image server is at least as reliable as the django server.
Then in my urls, I simply mapped the url to the http proxy view, something like:
(r'^protected/.*$', 'httpproxy.views.proxy'),
Then configured PROXY_FORMAT accordingly.
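For reference, the PROXY_FORMAT setting could look roughly like this (the internal host is a placeholder; PROXY_FORMAT is the setting django-http-proxy reads to build the upstream URL):

```python
# settings.py sketch: forward /protected/<path> to the internal image server.
# "internal-files.local" is a placeholder host, not part of the original answer.
PROXY_FORMAT = "http://internal-files.local/%s"
```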
Implement a simple one-shot signature system in the media machine, using any very thin (django is OK, as it does not need to get to the database) code layer, and x-accel-redirect in nginx.
In the auth machines, generate the correct signature only when the user is allowed to get the resource, and return a 302 to the signed media.
The signature could be time-based, expiring in a fraction of a second, so a sniffer can't use the URL again.
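A time-based signature like this can be sketched in a few lines of Python (the secret, paths, and TTL are placeholders; in practice the media machine would verify the same scheme in its thin code layer before issuing the x-accel-redirect):

```python
# Sketch of a short-lived HMAC signature for media URLs.
# SECRET is a placeholder shared between the auth and media machines.
import hashlib
import hmac
import time

SECRET = b"change-me"


def sign(path, expires):
    """Compute an HMAC-SHA256 over the path and expiry timestamp."""
    msg = f"{path}:{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def make_url(path, ttl=60):
    """Build a signed URL that expires ttl seconds from now."""
    expires = int(time.time()) + ttl
    return f"{path}?expires={expires}&sig={sign(path, expires)}"


def verify(path, expires, sig):
    """Reject expired or tampered requests (constant-time comparison)."""
    if time.time() > expires:
        return False
    return hmac.compare_digest(sig, sign(path, expires))
```

With a very short TTL, a sniffed URL becomes useless almost immediately, which is the "fraction of a second" idea above.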
You could use lighttpd to handle the streaming. It has a nice module to protect resources with signatures: http://redmine.lighttpd.net/wiki/1/Docs:ModSecDownload
So I'm thinking you could have nginx just proxy to the streaming server (that's lighttpd).
It's pretty easy to cook up the signature; here's a Python example: https://bitbucket.org/ionelmc/django-secdownload-storage/src/be9b18701015/secdownload_storage/init.py#cl-27