I have two EC2 instances: one hosts the main Django-based backend app and the other hosts a separate React-based frontend app that consumes some of the backend's data. The backend app has its own views and endpoints, plus some frontend of its own, separate from the React app on the other instance. I registered a new domain in AWS Route 53 that points to the frontend app, whose location is configured in nginx. What's missing are the session and csrftoken cookies from the backend, which are needed to make use of the database. Possibly relevant: the frontend app is already reachable through a redirect set up in a Django view, but only from the base server.
In the example code below I haven't pasted the whole config, just the concept of what I want to achieve.
Is there a way to route the domain's traffic to the frontend app, but through the backend first, so that the cookies required to reach the backend data are picked up along the way? Or should I think about a different solution to make use of the domain?
server {
    # ... some server data ...
    set $backend "aws-backend-dns";
    set $frontend "aws-frontend-dns";

    location @backend {
        proxy_pass http://$backend;
        # then hand off to @frontend -- but with some magic to keep the cookies!
    }

    location @frontend {
        proxy_pass http://$frontend;  # and I have the backend cookies here
    }
}
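For what it's worth, one common way to get that effect (my sketch, not from the thread; the upstream names, domain, and path prefixes are assumptions) is to skip the hand-off and serve both apps behind a single nginx server on the domain. The browser then sees one origin, so the sessionid and csrftoken cookies set by Django are sent automatically with the frontend's requests to the backend:

upstream backend  { server aws-backend-dns; }   # backend EC2 instance (placeholder)
upstream frontend { server aws-frontend-dns; }  # frontend EC2 instance (placeholder)

server {
    listen 80;
    server_name example.com;  # the Route 53 domain (placeholder)

    # API, admin and auth paths go to Django; its cookies are set on example.com.
    location ~ ^/(api|admin|accounts)/ {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else is the React app.
    location / {
        proxy_pass http://frontend;
    }
}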
Related
Let's assume I own the domain:
foobar.com/
I want to use Netlify for my static pages like
foobar.com/features/
foobar.com/pricing/
foobar.com/imprint/
etc.
But I also have a Django application on a separate server on AWS that has nothing to do with the static sites served by Netlify.
My Django has urls like
foobar.com/login/
foobar.com/dashboard/
etc.
Is it possible to use Netlify for a few static pages and my Django application for the other pages?
I don't know how to start, or if this is even possible.
It will depend on how your Django app handles the target, but you could use rewrites on Netlify: a redirect rule with HTTP status code 200 acts as a proxy (a rewrite).
If the API supports standard HTTP caching mechanisms like ETags or Last-Modified headers, the responses will even get cached by CDN nodes.
Have DNS point foobar.com at the Netlify site.
Decide on a domain for the Django site on AWS (e.g. proxy.foobar.com).
Set up a _redirects file at the root of the Netlify site to use proxying (rewrites) on Netlify:
/login/* https://proxy.foobar.com/login/:splat 200
/dashboard/* https://proxy.foobar.com/dashboard/:splat 200
Note: This is how you can incrementally switch a site over to Netlify without having to refactor a site all at once.
When you set a DNS record (e.g. an A record), you can point foobar.com at your AWS server or at Netlify, but not both.
Perhaps you can put the sites on different domains, for example dashboard.foobar.com for your Django site.
You could then configure Netlify to redirect foobar.com/dashboard/ to dashboard.foobar.com/dashboard/, for example with the rule sketched below.
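In _redirects form that could look something like this (hypothetical; a 301 here is a plain redirect, not a proxying rewrite):

/dashboard/* https://dashboard.foobar.com/dashboard/:splat 301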
No, you can't have foobar.com point to two different servers. You'll have to use subdomains, e.g.:
static.foobar.com -> DNS entry for Netlify
app.foobar.com -> DNS entry for your Django server
or, what you often see, is:
foobar.com and www.foobar.com -> DNS pointing to your main website (Netlify)
api.foobar.com and app.foobar.com -> DNS pointing to your Django app
I'm trying to develop a PWA for our sites. In production and staging we serve everything from one domain. However, in development on a local machine we serve the HTML from one port using the Django dev server, e.g.
http://localhost:8000
and the assets (including the JS) from another port using a Grunt server:
http://localhost:8001
The problem is that the scope of the service worker is therefore limited to the assets, which is useless; I want to offline-cache pages on the 8000-port origin.
I have somewhat worked around this by serving the service worker from a custom view in Django:
# urls.py
from django.conf.urls import url

url(r'^(?P<scope>.*)sw\.js$', service_worker_handler),

# views.py
from django.http import HttpResponse
from django.template.loader import render_to_string

def service_worker_handler(request, scope='/'):
    # Render sw.js with the requested scope and serve it as JavaScript.
    return HttpResponse(render_to_string('assets/sw.js', {
        'scope': scope,
    }), content_type="application/x-javascript")
However, I do not think this is a good solution. This code sets up custom routing rules which are not necessary for production at all.
What I'm looking for is a local fix using a proxy, or something else that would let me serve the service worker with grunt like all the other assets.
I believe this resource can be of help to you: https://googlechrome.github.io/samples/service-worker/foreign-fetch/
Basically, you host the service worker on port 8001 and the server handles it as shown in the example. Then you fetch it from there.
Foreign Fetch is described in more detail here: https://developers.google.com/web/updates/2016/09/foreign-fetch
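Alternatively, since the question explicitly asks for a local proxy fix: a small reverse proxy in front of both dev servers gives pages and assets a single origin, and exposing sw.js at the root of that origin gives the worker site-wide scope. A dev-only sketch; the port, the /static/ path, and the assumption that Grunt serves sw.js at /sw.js on port 8001 are mine:

server {
    listen 8080;  # the single dev origin becomes http://localhost:8080

    # Serve the worker at the root so its scope covers the whole site.
    location = /sw.js {
        proxy_pass http://localhost:8001/sw.js;
    }

    # Other assets come from the Grunt server.
    location /static/ {
        proxy_pass http://localhost:8001;
    }

    # Everything else (the pages) goes to Django.
    location / {
        proxy_pass http://localhost:8000;
    }
}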
Here is my setup.
Public site hosted by squarespace.com (www.example-domain.com)
Web application (AWS EC2/ELB), which I would like to be available via the same domain (my.example-domain.com)
Custom profile pages available as www.example-domain.com/username
My question is: how can I set up the DNS to achieve this? If it can't be done just through DNS, any suggestions? The problem I am facing is that if squarespace.com is handling the www.example-domain.com traffic, how can I have it only partially handle it for certain URLs? Maybe I am going about this the wrong way altogether, though.
The first two are OK. As you mention, though, (1) is not compatible with (3) in a pure DNS config, since www.example-domain.com has to resolve to a single endpoint.
Some ideas for non-DNS workarounds:
Host the Squarespace site at sqsp.example-domain.com and point your www domain at a custom web server that redirects the root (/) (HTTP 301/302) to sqsp.example-domain.com. This is fairly transparent to the user, except for the address shown in the browser.
The same, but serving at / a full-page HTML iframe containing sqsp.example-domain.com.
The iframe approach is less clean; Google both solutions to form your own opinion.
EDIT:
As @mike-ryan mentioned, there is also the proxy solution, where you configure your web server to fetch the content to return to your user from another server. Since you are already using AWS, a smart way to do this is CloudFront: you can set up CloudFront to proxy one server for one URL pattern and another server for other URLs. This is probably the fastest way to implement what you need. Of course, a proxy is one more hop, so it may add some delay.
If you really want to have content served from different servers while only using a single domain name, you'll need to set up a proxy server to handle the request routing for you. I am assuming your custom profile pages must be served from your EC2 instance.
Nginx will receive all requests, and will then decide whether they should be sent to Square Space or your web app. Requests will be reverse proxied to Square Space or to your app, depending on the URL.
This is similar to #smad's answer, except it will all be invisible to the users which IMHO is better than redirecting the user to a new domain name.
Example steps:
Set up an Nginx server, create two virtual hosts - one for my.example.com, and one for www.example.com
Create two upstreams in your Nginx config - one for Square Space, and one for your app
Configure the www.example.com virtual host to reverse proxy connections to the Square Space upstream, if the URL is "/". Otherwise, traffic should be proxied to your app upstream [0]
Configure the my.example.com virtual host to proxy all traffic to your app upstream
[0] how to reverse proxy via nginx a specific url?
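A minimal sketch of those steps; the Squarespace endpoint and the app address are placeholders of mine, not from the answer:

upstream webapp { server 127.0.0.1:8000; }  # your app (placeholder)

server {
    listen 80;
    server_name www.example.com;

    # The landing page comes from Square Space (placeholder endpoint).
    location = / {
        proxy_pass https://ext-cust.squarespace.com;
        proxy_set_header Host www.example.com;
        proxy_ssl_server_name on;
    }

    # Profile pages and everything else go to the app.
    location / {
        proxy_pass http://webapp;
    }
}

server {
    listen 80;
    server_name my.example.com;
    location / {
        proxy_pass http://webapp;
    }
}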
To tackle growing server load I have come up with a solution: render a plain HTML site for visitors of my site. So visitors are routed to the plain HTML site, whereas registered users are routed to the dynamic site.
To separate the two user groups I use cookies. The load balancer and web server I'm using is nginx.
The nginx conf looks like this:

# Registered users carry a cookie whose value contains "mysite.com".
set $cookie_set 0;
if ($http_cookie ~ 'mysite.com') {
    set $cookie_set 1;
}

location / {
    if ($cookie_set ~ 0) {
        proxy_pass http://static-site;
    }
    if ($cookie_set ~ 1) {
        proxy_pass http://dynamic-site;
    }
}
The mentioned strategy works, but it's not bulletproof. There are situations where it fails, e.g. browsers that don't support cookies, or forged cookies.
There must be a more sophisticated strategy for doing this. Any experiences, comments, and ideas are welcome.
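As an aside, one refinement often suggested for this pattern (my sketch, not from the thread) is nginx's map directive, which avoids if inside location. It assumes the same static-site and dynamic-site upstreams as in the question, and map must live at the http{} level:

map $http_cookie $upstream_pool {
    default       static-site;   # anonymous visitor: plain HTML farm
    ~mysite\.com  dynamic-site;  # cookie present: registered user
}

server {
    location / {
        proxy_pass http://$upstream_pool;
    }
}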
This is where a real load balancer, such as HAProxy, is needed.
With HAProxy you can route traffic based on the current capacity of each backend farm.
The configuration snippet below forwards traffic to the static backend when the dynamic one starts queueing (the frontend name, bind line, and server lines are filled in for illustration):
frontend fe_main
    bind :80
    use_backend bk_static if { queue(bk_dynamic) ge 1 }
    default_backend bk_dynamic

backend bk_static
    server static1 static-site:80

backend bk_dynamic
    server dynamic1 dynamic-site:80
The same type of configuration could key on the number of running sessions on the bk_dynamic backend, as in the variant below.
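For example, a variant keyed on current connections rather than queue length might look like this (the threshold is made up):

use_backend bk_static if { be_conn(bk_dynamic) gt 100 }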
Baptiste
I have multiple Django projects running on one server using gunicorn and nginx. Currently they are each configured to run on a unique port of the same IP address using the server directive in nginx. All this works fine.
...
server {
listen 81;
server_name my.ip.x.x;
... #static hosting and reverse proxy to site1
}
server {
listen 84;
server_name my.ip.x.x;
... #static hosting and reverse proxy to site2
}
...
I came across a problem when I had 2 different projects open in 2 tabs and I realized that I could not be logged into both sites at once (both use the built-in Django User model and auth). Upon inspecting the cookies saved in my browser, I realized that the cookie is bound to just the domain name (in my case just an ip address) and it does not include the port.
On the second site I tried changing SESSION_COOKIE_NAME and SESSION_COOKIE_DOMAIN, but it doesn't seem to be working, and with these current settings I can't even log in.
SESSION_COOKIE_DOMAIN = 'my.ip.x.x:84' #solution is to leave this as default
SESSION_COOKIE_NAME = 'site2' #just using this works
SESSION_COOKIE_PATH = '/' #solution is to leave this as default
#site1 is using all default values for these
What do I need to do to get cookies for both sites working independently?
Just change SESSION_COOKIE_NAME. SESSION_COOKIE_DOMAIN doesn't support port numbers AFAIK, so the domain is the same for all your apps.
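For instance, a minimal sketch for the second site (CSRF_COOKIE_NAME added as an assumption, since the CSRF cookie collides the same way):

# site2/settings.py
SESSION_COOKIE_NAME = 'site2_sessionid'
CSRF_COOKIE_NAME = 'site2_csrftoken'  # assumption: avoids the same clash for CSRF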
Another solution that doesn't require hard-coding different cookie names for each site is to write a middleware that changes the cookie name based on the port the request came in on.
Here's a simple version (just a few lines of code).
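The original snippet isn't included here; below is a hedged reconstruction of the idea. Note that mutating settings per request is not thread-safe, so treat it as a sketch only, and place it before SessionMiddleware in MIDDLEWARE:

# middleware.py
from django.conf import settings

class PortSessionCookieMiddleware:
    # Derive the session cookie name from the port the request came in on.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # e.g. 'sessionid_81' for site1, 'sessionid_84' for site2
        settings.SESSION_COOKIE_NAME = 'sessionid_%s' % request.get_port()
        return self.get_response(request)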