Suggestions for hosting an SPA on AWS

I have an application with an Angular frontend and a Django backend. I've already set up my Django application to run on Elastic Beanstalk, but I am unsure how I should serve static files. I'd rather not handle this within the Django application.
I have tried using an nginx reverse proxy with Elastic Beanstalk to serve the files, but I'm unable to serve them at "/", only at paths like "/index" or "/dashboard", and the JS files that index.html needs aren't found (404 errors).
I thought about rewriting the entire nginx configuration, but I'm unsure where to start. Any ideas would be very helpful!

You can host your Angular frontend on S3 (with static website hosting enabled). To make it more performant and cheaper, add CloudFront in front of it. Different paths of your application (the API routes) can be routed to the backend via CloudFront's "Behaviors" feature, and you can disable caching for those dynamic paths.
Reference:
https://medium.com/#peatiscoding/here-is-how-easy-it-is-to-deploy-an-angular-spa-single-page-app-as-a-static-website-using-s3-and-6aa446db38ef
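If you'd rather keep the nginx reverse proxy on Elastic Beanstalk instead, the usual fix for the "/" problem and the 404s on the JS bundles is to point nginx's document root at the Angular build output and fall back to index.html for unknown paths. A minimal sketch, assuming the build lives in /var/www/frontend/dist, the Django API is mounted under /api/, and gunicorn listens on 127.0.0.1:8000 (all of these are assumptions):

    # Serve the Angular build directly from nginx; paths and ports are assumptions.
    server {
        listen 80;
        server_name example.com;

        root /var/www/frontend/dist;   # wherever the Angular build output lands
        index index.html;

        # Real files (the hashed .js/.css bundles) are served as-is; everything
        # else falls back to index.html so "/" and deep links like /dashboard work.
        location / {
            try_files $uri $uri/ /index.html;
        }

        # Pass API calls through to the Django backend (gunicorn).
        location /api/ {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }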

Related

How to deploy a Django + Whitenoise application using gunicorn?

I am using Whitenoise to serve static files in my Django app. I am NOT using nginx. I plan to use Whitenoise behind a CDN like CloudFront in the future. See the Whitenoise FAQs.
I have been looking for deployment instructions, which handle these questions:
Since I am not using nginx, I plan to bind gunicorn directly to port 80. This results in a Permission Denied error. I can run gunicorn as root, but that seems like a bad approach.
How do I handle SSL certificates? Typically this is handled by servers like nginx.
EDIT: I am deploying my application on an Ubuntu 18.04 VM on Google Cloud Compute Engine.
P.S.: Mine is not going to be a very high-traffic website, but others have used this configuration to serve high-traffic sites; see Squeezing every drop of performance out of a Django app on Heroku.
TL;DR
I used nginx as the HTTP server. I removed the static-file configuration from nginx, so static file requests are passed through to the WSGI layer (gunicorn) and handled by Whitenoise. You can therefore follow any 'nginx + gunicorn + django' deployment instructions/tutorials, which are easy to find with a simple Google search.
This post cleared it up for me: Deploying your Django static files to AWS (Part 2).
Long Answer
As mentioned before, there are many tutorials about deploying Django + Whitenoise applications on Heroku. As pointed out in the comments:
Heroku has its own proxy layer in the front end, and those tutorials are therefore not really relevant here.
Even without verifying that statement, I thought it must be true: gunicorn is not a full-fledged web server. In fact, gunicorn's creators strongly recommend running it behind a proxy server such as nginx; see the docs.
I was confused because I had always thought of nginx as just a reverse proxy. Serving static assets is only one of nginx's functions; it also provides features gunicorn does not, such as buffering slow clients, which helps prevent denial-of-service attacks.
I knew this already; it would have been foolish not to use nginx or some other web server.
Whitenoise is there to set proper caching headers for static files and to enable gzip/brotli compression. When used with whitenoise.storage.CompressedManifestStaticFilesStorage, it automatically generates versioned static files, e.g. /static/js/app.49ec9402.js if you reference the file in the template as {% static 'js/app.js' %}. A versioned file gets max-age set to 10 years, i.e. it is effectively cached forever.
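For reference, a minimal sketch of the settings this is describing, following the Whitenoise docs (the paths and the BASE_DIR layout are assumptions):

    # settings.py - minimal Whitenoise sketch
    import os

    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))  # default Django layout

    MIDDLEWARE = [
        "django.middleware.security.SecurityMiddleware",
        "whitenoise.middleware.WhiteNoiseMiddleware",  # directly after SecurityMiddleware
        # ... the rest of your middleware ...
    ]

    STATIC_URL = "/static/"
    STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")  # collectstatic target

    # Produces hashed, compressed copies like /static/js/app.49ec9402.js at
    # collectstatic time and serves them with far-future cache headers.
    STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"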
If you are not deploying on Heroku, you will still need a web server like nginx, so you can follow any 'nginx + gunicorn + django' deployment instructions/tutorials, which are easy to find with a simple Google search. One of them, Deploying your Django static files to AWS (Part 2), is what helped me get this sorted out.
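The nginx side then has no static-file location at all, so /static/ requests fall through to gunicorn and are answered by Whitenoise. A minimal sketch (domain, port, and TLS details are assumptions):

    server {
        listen 80;
        server_name example.com;

        # Terminate TLS here (listen 443 ssl + certificate directives) rather than
        # in gunicorn; gunicorn stays bound to a local high port, never port 80.
        location / {
            proxy_pass http://127.0.0.1:8000;   # gunicorn
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

This also answers the port 80 and SSL questions above: nginx owns ports 80/443 and the certificates, while gunicorn runs as an unprivileged user on a high local port (or a unix socket).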

Strategies for deploying a Django app

I have a question that is probably more general than django-related development. The background is quite simple:
I am working on a project whose pages are mostly part of a web application (this is what I am using Django for). In addition to the app-related pages, however, there are quite a few auxiliary pages (a landing page, an FAQ page, a contact page, etc.) that have basically nothing to do with the web app.
What is the standard strategy for deploying such a project? It seems flawed to route requests for these static pages through Django. What seems to make sense is running two servers: one responsible for running the Django app, and a separate server responsible for serving the static pages (including, perhaps, the static content used by the app portion of the site).
What are some guiding principles that should be employed when making these decisions?
It's not uncommon to run Django side by side with a static site or another CMS.
You would need a front end server to route the request to either the static content or a CMS.
There are two common strategies:
Use a URL prefix to decide where to route (e.g. example.com/static/ goes to static files and example.com/ goes to Django). In Apache this is configured with the Alias directive; a sketch of the same idea in nginx follows after these two options.
Put the application server and the static file server on separate domains/subdomains (e.g. static.example.com for static files and app.example.com for Django). You can do this with a front end server on a single machine (configured with VirtualHost in Apache) or on separate machines. In either case, you'd need to configure DNS to point your subdomains at the right machine(s).
The former is simpler to set up, but the latter scales better.
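For illustration, here is roughly what the first (URL prefix) strategy looks like in nginx terms, the equivalent of the Apache Alias approach; paths, ports, and prefixes are assumptions:

    server {
        listen 80;
        server_name example.com;

        # Static pages and assets served straight from disk.
        location /static/ {
            alias /var/www/example/static/;
        }

        # Everything else goes to the Django app.
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
        }
    }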
Servers commonly used in front of an application server include Apache, nginx, and uWSGI, but pretty much any production-quality web server can do it.
In fact, Django's deployment documentation (e.g. for Apache) always instructs you to have static files served by the front end server, even in a Django-only installation, because Django was not designed to serve static content efficiently the way front end web servers are.
The django.contrib.staticfiles app exists to let Django refer to static files hosted on a different server, and to make it easy to switch between serving static content with Django's built-in server during development and with the front end server in production.
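Concretely, that switch is just a settings change; a small sketch, with the hostname being an assumption:

    # settings.py
    if DEBUG:
        STATIC_URL = "/static/"                     # dev: served by runserver/staticfiles
    else:
        STATIC_URL = "https://static.example.com/"  # prod: served by the front end / static host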

When using offline compression with django-compressor, how can I add support for multiple deployment stages (prod, qa, test) with a CDN?

I have a very simple setup for a Django application: it gets packaged in Jenkins, which runs offline compression with django-compressor, runs integration tests, and then deploys to a QA server.
Later I would have a job that uses the same package to deploy to our prod server.
This all works well as long as resource paths are the same on all three stages, but I would now like to add a CDN for static resources that simply proxies requests to the prod server, which means I need a different URL prefix (MEDIA_URL) for production. Currently, when I simply change my packaging to point MEDIA_URL at the CDN, my integration tests fail and the QA server breaks, because the CDN proxies the production server. So I would like to keep using relative, non-prefixed paths for integration, and customize the URL prefix for the compressed media files with a remote URL.
Is there any sane way to do this? For my integration tests I could imagine simply creating an /etc/hosts entry to point the CDN host name at the test server, but that seems like an ugly workaround.
You should probably set up a separate CDN instance for each environment. I do this using Amazon S3; it's easy to create a new bucket for each environment.
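One way to wire that up is to pick the asset prefix from an environment variable per stage. A sketch (the variable name, stage names, and domains are assumptions); note that offline compression renders COMPRESS_URL into the manifest, so the compress/packaging step has to run once per stage rather than once for all of them:

    # settings.py
    import os

    STAGE = os.environ.get("DEPLOY_STAGE", "test")   # "test", "qa" or "prod"

    ASSET_PREFIXES = {
        "test": "/static/",                            # relative, no CDN
        "qa":   "https://qa-cdn.example.com/static/",  # per-environment bucket/CDN
        "prod": "https://cdn.example.com/static/",
    }

    STATIC_URL = ASSET_PREFIXES[STAGE]
    COMPRESS_URL = STATIC_URL    # django-compressor prefixes compressed files with this
    COMPRESS_OFFLINE = True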

Routing a subfolder on S3 to a WordPress blog on an EC2 Windows server

I'm using Amazon S3 to host the static files for my website. Now I want to add a WordPress blog, which would be hosted in a /blog subfolder. I followed an article that explains the easiest way to install WordPress on EC2 on a Microsoft Windows server, here:
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2Win_CreateWordPressBlog.html
I'm wondering how I can set up the /blog/* subfolder of my static site on S3 to rewrite all URLs to the blog hosted on EC2.
How is that possible?
Thanks for the help.
As far as I know, that's not possible. S3 supports redirects, but it won't act as a proxy for your dynamic content, as there's no concept of URL "rewriting" (which can only really happen at the web server level, unless you're redirecting to a completely different domain). You have a few alternatives, though.
Host your WordPress blog on a subdomain: blog.yourdomain.com serves the blog, and you just link to it from your static site.
Find a way to generate static files from your WordPress blog and put those files in the blog "subfolder". There are WordPress plugins that will do this, I believe.
Just thought of another one: set up CloudFront in front of your website and use behaviors to forward blog requests to your blog server.
There is a final option where you make your default 404 page into a little JS application that acts as a router for fetching pages from your WordPress backend, but hahahahaha don't do that.

AWS, Django, gunicorn, and S3 - do I need nginx then?

I'm building an app in Django that I want to deploy on an AWS EC2 server. The app will run on gunicorn, and I want to put the static files on S3. So my question is: do I need to use nginx at all?
Is there any other benefit to using nginx besides serving static files?
Arek
Putting nginx at the front of your stack not only allows you to route static content requests to your S3 storage, but also gives you the ability to do things like cache your Django responses and reduce the number of hits on your app and database. You can set up fine-grained cache policies and have more control over exactly where requests go, all while keeping the same URL structure you set up in Django.
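A minimal sketch of that kind of caching (zone name, sizes, and TTLs are assumptions; in practice you'd limit it to paths that are actually safe to cache):

    # Goes in the http {} block.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=django_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_cache django_cache;
            proxy_cache_valid 200 301 302 10m;   # cache successful responses briefly
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:8000;    # gunicorn
        }
    }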
Even though you're placing static files on S3, you still need a web server to serve them, right? I don't see how S3 changes the fact that with Apache/WSGI or gunicorn it's better to have something like nginx serving static files.
Also, read this: http://gunicorn.org/deploy.html