I'm trying to develop a PWA for our sites. In production and staging, we serve everything from one domain. However, in development on a local machine we serve the HTML from one port using the Django dev server, e.g.
http://localhost:8000
And the assets (including JS) using Grunt server from another port:
http://localhost:8001
The problem is that the service worker's scope is therefore limited to the assets origin, which is useless: I want to offline-cache pages on the port-8000 origin.
I have been able to work around this somewhat by serving the service worker from a custom Django view:
# urls.py
url(r'^(?P<scope>.*)sw\.js', service_worker_handler)

# views.py
from django.http import HttpResponse
from django.template.loader import render_to_string

def service_worker_handler(request, scope='/'):
    return HttpResponse(render_to_string('assets/sw.js', {
        'scope': scope,
    }), content_type="application/x-javascript")
However, I do not think this is a good solution. This code sets up custom routing rules which are not necessary for production at all.
What I'm looking for is a local fix using a proxy, or something else that would let me serve the service worker with grunt like all the other assets.
I believe this resource can be of help to you: https://googlechrome.github.io/samples/service-worker/foreign-fetch/
Basically, you host the service worker on port 8001 and that server handles it as shown in the example.
Then you fetch it from there.
Foreign fetch is described in more detail here: https://developers.google.com/web/updates/2016/09/foreign-fetch
Related
I am writing a single-page app with React for educational purposes. My React Router v4 BrowserRouter handles client-side routing correctly on CodeSandbox but not locally. In this case, the local server is the WebStorm built-in dev server. HashRouter works locally but BrowserRouter does not.
Functioning properly: https://codesandbox.io/s/j71nwp9469
You are likely serving your app on the built-in web server (localhost:63342), right? The internal web server returns 404 for 'absolute' URLs (the ones starting with a slash) because it serves files from localhost:port/project_name rather than localhost:port. That's why you have to change all URLs from absolute to relative ones.
There is no way to set up the internal web server to use the project root as the server document root. But you can configure it to use URLs like http://<host name>:<port>, where the host name is a name specified in your hosts file, e.g. 127.0.0.1 myhostName. See https://youtrack.jetbrains.com/issue/WEB-8988#comment=27-577559.
The solution was to understand how push state routing and the history API works. It is necessary to proxy requests through the index page when serving Single Page Applications that utilize the HTML5 History API.
The WebStorm dev server is not expected to include this feature, so the focus on WebStorm in this thread was misplaced.
There are several libraries of fewer than 20 lines that do this for us, or it can easily be hand-coded.
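For illustration, here is a rough hand-coded fallback using Python's built-in http.server rather than any particular library; the port and the index.html entry point are assumptions. Any request path that does not map to a real file is answered with index.html, so client-side routes keep working on refresh.

# spa_dev_server.py - minimal sketch of an HTML5 history API fallback.
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

class SPARequestHandler(SimpleHTTPRequestHandler):
    def send_head(self):
        # Serve real files (JS, CSS, images) normally; for unknown paths,
        # fall back to the SPA entry point instead of returning 404.
        if not os.path.exists(self.translate_path(self.path)):
            self.path = '/index.html'
        return super().send_head()

if __name__ == '__main__':
    # Serve the current directory on port 8080 (an arbitrary choice).
    HTTPServer(('127.0.0.1', 8080), SPARequestHandler).serve_forever()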
First of all, Google and SO searches didn't help me: lots of tips regarding Django's staticfiles, which I believe are not relevant here.
I have inherited a project consisting of:
Django backend in the form of an API returning JSON responses only;
standard Swampdragon deployment pushing realtime updates to frontend; very little configuration has been done here;
Frontend webapp built on Backbone and marionette.js, compiled and minified by Grunt.
My problem is: the frontend needs to know addresses for swampdragon and django servers; right now those values are hardcoded, so there is for example a Backbone model with lines like:
url: function() {
    return App.BACKEND_URL + 'settings/map';
}
Why hardcoded: backend can be served on any port or have a subdomain to itself; frontend is static and normally would be simply thrown into /var/www (for Apache) or would use some very simple nginx config. Both will be served from the same place, but there is no guarantee the port numbers or subdomains would match.
Idea number 1: guess what BACKEND_URL is from JavaScript, by taking window.location.host and appending the standard port. That's hackish and error-prone.
Idea number 2: move frontend to Django and make it ask for swampdragon credentials (they would be sent in the context of home view). Problem with that is, the frontend files are compiled by grunt. So where Django would kindly expect something like:
<script src="{% static 'scripts/vendor/modernizr.js' %}"></script>
I actually have
<script src="scripts/vendor/a8bcb0b6.modernizr.js"></script>
where 'a8bcb0b6' is Grunt's hash/version number and will be regenerated during the next minification/build. Do I need to add additional logic to get rid of such stuff and copy Grunt's output directory to Django's static and template dirs?
Or is there another way to make this work, the right one, that I am missing?
Your architecture is already clean: no need to make Django know about Grunt or serve static files, and no need to use JS hacks to guess port numbers.
Reverse Proxy
Use a reverse proxy like nginx or any other web server you like as a front end to both the static files and the REST API.
In computer networks, a reverse proxy is a type of proxy server that
retrieves resources on behalf of a client from one or more servers.
These resources are then returned to the client as though they
originated from the proxy server itself. (Wikipedia)
I will outline the important aspects without going into too much detail:
URL for the REST API
We make configs so that nginx will forward the API requests to Django
location /api {
    proxy_pass http://127.0.0.1:8000;   # assumes Django listens here
    proxy_set_header Host $http_host;   # preserve host info
}
So the above assumes your Django REST API is mapped to /api and runs on port 8000 (e.g. you can run gunicorn on that port, or any other server you like).
http://nginx.org/en/docs/http/ngx_http_proxy_module.html
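On the Django side very little is needed to sit behind this proxy; a minimal settings.py sketch, where the host names are placeholders for your own:

# settings.py (sketch): nginx forwards the original Host header,
# so Django only has to whitelist it; replace the names with your own.
ALLOWED_HOSTS = ['example.com', '127.0.0.1']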
URL for our front end app
Next nginx will serve the static files that come out of grunt, by simply pointing it to the static folder
location / { alias /app/static/; }
The above assumes your static resources are in /app/static/ folder (like index.html, your CSS, JS etc). So this is primarily to load your BackboneJS app.
Django static files
The next step is not required, but if you have static files that are used by the Django app itself (files generated by ./manage.py collectstatic, e.g. the Django admin or the Django REST Framework UI), simply map them according to the STATIC_URL and STATIC_ROOT in your Django settings.py:
location /static { alias /app/django_static_root/; }
/static and django_static_root being the STATIC_URL and STATIC_ROOT respectively
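For reference, the matching values in settings.py under that assumption would look roughly like this:

# settings.py (sketch): must mirror the nginx location and alias above,
# so that ./manage.py collectstatic drops files where nginx looks for them.
STATIC_URL = '/static/'
STATIC_ROOT = '/app/django_static_root/'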
To sum up
So e.g. when you hit example.com/, nginx simply serves up the static files; then when a JS script makes a REST call to /api, it gets caught by the /api nginx location and forwarded to Django.
The end result is that example.com/ and example.com/api both hit the same front-end web server, which proxies them to the right places.
So there you have it: reverse proxying solves your ports and subdomain issues (and many others, like slow static file serving from Django, same-origin policies in web browsers, and firewalls not liking anything besides the default HTTP and HTTPS ports).
I have two instances of Odoo in a server in the cloud. If I make the following steps I get "Internal Server Error":
I log in to the first instance (http://111.222.33.44:3333)
I close the session
I load the address of the second instance in the same browser (http://111.222.33.44:4444)
If I want to work in the second instance (on another port), I need to remove the browser cookies first to access the other Odoo instance. If I do this, everything works fine.
If I load them in different browsers (Firefox and Chromium) at the same time, they also work well.
It's not an Nginx issue because I tried with and without it.
Is there a way to solve this permanently? Is this the expected behaviour?
If you have access to the source code, you can change this file as shown below and check whether the issue is solved:
addons/web/controllers/main.py
if db != request.session.db:
    request.session.logout()
    request.session.db = db
    abort_and_redirect(request.httprequest.url)
And delete the line request.session.db = db that appears below this if statement.
Try the following changes in:
openerp/addons/base/ir/ir_http.py
In the method _handle_exception, somewhere around line 140, you will find this piece of code:
attach = self._serve_attachment()
if attach:
    return attach
Replace it with:
if isinstance(exception, werkzeug.exceptions.HTTPException) and exception.code == 404:
    attach = self._serve_attachment()
    if attach:
        return attach
You can perfectly well serve all the databases with a single OpenERP server on your machine. Unfortunately you did not mention what error you were seeing and what you expected as a result, which makes it a bit harder to help you ;-)
Anyway, here are some random ideas based on the information you provided:
If you have a problem with OpenERP not listening on all interfaces, try specifying 0.0.0.0 as the xmlrpc_interface in the configuration file; this should make OpenERP listen on port 8069 on all IPs.
Note that Apache is not relevant if you're connecting to e.g. http://www.sample.com:8069/?db=openerp because you're directly connecting to OpenERP. If you want to go through Apache, you need to setup ReverseProxy rules in your vhost configs, and OpenERP does not need to listen to all public IPs then.
OpenERP 6.1 and later can autodetect the database name based on the virtual host name, and filter the name of the available databases: you need to start it with the --db-filter parameter, which represents a pattern used to filter the list of available databases. %h represents the domain name and %d is the first domain component of that domain. So for example with --db-filter=^%d$ I will only see the test database if I end up on the server using http://test.example.com:8069. If there's only one database match, the list is not displayed and the user will directly end up on the right database. This works even behind Apache reverse proxies if you make sure that OpenERP see the external hostname, i.e. by setting a X-Forwarded-Host header in your Apache proxy config and enabling the --proxy mode of OpenERP.
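As a conceptual illustration only (this is not OpenERP's actual code), the substitution and matching described above work roughly like this:

# Rough sketch of how a --db-filter pattern is applied to a hostname.
import re

def visible_databases(databases, host, db_filter=r'^%d$'):
    domain = host.partition(':')[0]      # e.g. 'test.example.com'
    first = domain.partition('.')[0]     # %d -> 'test'
    pattern = db_filter.replace('%h', re.escape(domain)).replace('%d', re.escape(first))
    return [db for db in databases if re.match(pattern, db)]

# With --db-filter=^%d$ and http://test.example.com:8069,
# only the database named 'test' remains visible.
print(visible_databases(['test', 'production'], 'test.example.com:8069'))  # ['test']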
The port reuse problem comes because you are trying to start multiple OpenERP servers on the same interface/port combination. This is simply not possible unless you are careful to start just one server per IP with the IP set in the xmlrpc_interface parameter, and I don't think you need that. The named-based virtual hosts that Apache supports are all handled by a single master process that listens on port 80 on all the interfaces. If you want to do the same with OpenERP you only need to start one OpenERP server for all your domains, and make it listen on 0.0.0.0, port 8069, as I explained above.
On top of that it's not clear what you would have set differently in the various config files. Running 40 different OpenERP servers on the same machine with identical code sounds like a lot of overkill. OpenERP is designed to be multi-tenant so that many (read: hundreds) of databases can be served from the same server.
Finally, I think this is the expected behaviour. The cookies of all websites are stored per website (per domain) in the web browser. So if I only change the port, the cookies of the first instance conflict with the cookies of the other instance because they have the same domain (111.222.33.44 in my example).
So there are some workarounds:
Change Domain Locally
Creating a couple of domain names on my laptop in /etc/hosts:
111.222.33.44 cloud01
111.222.33.44 cloud02
Then the cookies don't interfere with each other anymore. To access each instance:
http://cloud01:3333
http://cloud02:4444
Browser Extension: Multilogin or Multiaccount
There is another workaround. If I use this Chromium extension, the problem disappears because the sessions are treated separately:
SessionBox
I've got a web app deployed on Dotcloud where the data on each page can be quite expensive to calculate (many seconds). I want to make initial page loads as speedy as possible by returning cached information and then hitting the server with a bunch of AJAX requests that cause the full calculations to occur. But I don't want these AJAX requests to jam up initial page loads for other users, so I want them queued separately.
I'm thinking the same Django app should be used for both servers, especially because the data model is shared. So the dotcloud.yml file would look something like:
www:
  type: python
www-ajax:
  type: python
(...)
But how can I route different URLs to each class of instances? Also, I've read about Gunicorn for long requests. These AJAX requests are long, but they don't depend on external resources, besides the DB. Is this a situation for Gunicorn, and if so, is there an easy way to integrate it into the config?
If you set it up the way you are describing in your example dotcloud.yml file, you will have two different services, with two different URLs. So if you want to send stuff to the AJAX service, you use the AJAX URL; if you want the regular one, you use the www URL.
To run gunicorn you could use the python-worker service and allocate an HTTP port for the worker, and then have gunicorn listening on that port. It is important to note that the python-worker doesn't have nginx in front of it like the python service does, so gunicorn will need to listen for the traffic directly.
So to put it together it would look something like this.
www:
  type: python
  approot: myapp
www-ajax:
  type: python-worker
  approot: myapp
  ports:
    www: tcp
  process: gunicorn -b 0.0.0.0:$PORT_WWW yourapp:app
Your process string will most likely look different, but you get the picture.
You also don't need the approot; I just put it there as an example.
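For completeness, the yourapp:app part of the process line refers to a WSGI callable; with a Django project it would typically look something like the sketch below, where the myapp module and settings path are guesses about your layout:

# myapp/wsgi.py (sketch): the WSGI callable that gunicorn imports;
# 'myapp.settings' is an assumption about your settings module path.
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = get_wsgi_application()

# started by dotcloud as: gunicorn -b 0.0.0.0:$PORT_WWW myapp.wsgi:app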
I want to create a localhost-only API in Django and I'm trying to find a way to restrict access to a view so that it is only allowed from the server itself (localhost). I've tried using:
'HTTP_HOST',
'HTTP_X_FORWARDED_FOR',
'REMOTE_ADDR',
'SERVER_ADDR'
but with no luck.
Is there any other way?
You could configure your webserver (Apache, Nginx etc) to bind only to localhost.
This would work well if you want to restrict access to all views, but if you want to allow access to some views from remote users, then you'd have to configure a second Django project.
The problem is a bit more complex than just checking a variable. To identify the client IP address, you'll need
request.META['REMOTE_ADDR'] -- The IP address of the client.
and then compare it with request.get_host(). But take into account that the server might be started on 0.0.0.0:80, in which case you'll probably need to do:
import socket
socket.gethostbyaddr(request.META['REMOTE_ADDR'])
and compare this with, let's say,
socket.gethostbyaddr("127.0.0.1")
But you'll need to handle lots of edge cases with these headers and values.
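To make the REMOTE_ADDR approach concrete, here is a minimal sketch of a view decorator (with the caveat that REMOTE_ADDR becomes the proxy's address as soon as a reverse proxy sits in front of Django):

# localhost_only.py (sketch): reject requests that do not come from loopback.
from functools import wraps
from django.http import HttpResponseForbidden

LOOPBACK_ADDRESSES = {'127.0.0.1', '::1'}

def localhost_only(view):
    @wraps(view)
    def wrapper(request, *args, **kwargs):
        if request.META.get('REMOTE_ADDR') not in LOOPBACK_ADDRESSES:
            return HttpResponseForbidden('Local access only')
        return view(request, *args, **kwargs)
    return wrapper

# Usage:
# @localhost_only
# def my_api_view(request): ...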
A much simpler approach could be to put a reverse proxy in front of your app that sets, let's say, a custom header like X-Source: internet. Then you can set things up so that traffic from the internet goes through the proxy, while local traffic (in your local network) goes directly to the web server. Then, if you want a specific view to be accessible only from your local network, just check for this header:
if 'HTTP_X_SOURCE' in request.META:   # Django exposes the X-Source header as HTTP_X_SOURCE
    # request is coming from the internet, not the local network
    ...
else:
    # presumably we have a local request
    ...
But again, this is the 'firewall approach': it requires some more setup, and you have to make sure there is no way to reach the app from outside that doesn't go through the reverse proxy.