How do you serve ember-cli from https://localhost:4200 in development - ember.js

For our authentication to work with our Ember app we need to serve the app from a secure URL. We have a self-signed SSL cert.
How do I set up ember-cli to serve index.html from an https domain?
Cheers

Also see https://stackoverflow.com/a/30574934/1392763.
If you will always use SSL, you can set "ssl": true in the .ember-cli file for your project, which will make the ember serve command use SSL by default without you having to pass the command-line flag every time.
By default ember-cli will look in an ssl folder in the root of your project for server.key and server.crt files, but you can customize that as well with the --ssl-key and --ssl-cert options to provide alternate paths.
If you don't already have a self-signed SSL certificate for development you can follow these instructions to easily generate one: https://devcenter.heroku.com/articles/ssl-certificate-self
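If you'd rather skip that guide, a self-signed certificate for local development can usually be generated with a single openssl command (a sketch; the subject and lifetime are just examples):
# Generate a key and self-signed cert into the ssl/ folder that ember-cli checks by default
mkdir -p ssl
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ssl/server.key -out ssl/server.crt \
  -subj "/CN=localhost"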
Example .ember-cli:
{
  "disableAnalytics": false,
  // Use SSL for development server by default
  "ssl": true,
  "ssl-key": "path/to/server.key",
  "ssl-cert": "path/to/server.crt"
}
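With that in place, a plain ember serve should pick up the certificate automatically. The same thing can also be done per invocation with command-line flags (paths as in the example above):
# Uses the ssl settings from .ember-cli
ember serve

# Or pass the options explicitly
ember serve --ssl --ssl-key=path/to/server.key --ssl-cert=path/to/server.crt

# Either way, the app is served from https://localhost:4200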

EDIT
For googlers, this is no longer true. Use ember serve --ssl (thanks to xdumaine).
You can't do it directly from ember-cli without putting your hands in the code, which I don't recommend :)
If you want to go that way, look at node_modules/ember-cli/lib/tasks/server/express-server.js and maybe also node_modules/ember-cli/lib/tasks/server/livereload-server.js.
For those who still want to go through a web server:
There are other, cleaner solutions, for example using nginx as a (reverse) proxy :) or even serving the /dist folder directly from nginx :)
Basic reverse proxy example with nginx (I didn't try it with SSL but it should theoretically work :p):
server {
    listen 443;
    server_name *.example.com;

    ssl on;
    ssl_certificate /path/to/your/certificate.crt;
    ssl_certificate_key /path/to/your/key.key;

    location / {
        proxy_pass http://localhost:4200;
    }
}
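Note that ember serve's livereload runs over a websocket; if livereload stops working behind the proxy, you may also need the usual websocket proxy headers in that location block (an untested sketch):
location / {
    proxy_pass http://localhost:4200;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}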
I said nginx, but actually any web server can do the trick :)
NB: DO NOT USE ember serve IN PRODUCTION.

I use the tunnels gem with pow port-proxying.
Update: more detail
Using a real web server (like the previous answer with nginx) is a great way to go, and is probably more like your production setup. However, I manage a lot of different projects, and am not that interested in managing an nginx configuration file for all of my projects. Pow makes it easy to make a lot of different projects available on port 80 on one development machine.
Pow has two main modes. The primary function is to be a simple server for Rack applications, accessed via a custom local domain such as http://my-application.dev/. This is done by symlinking ~/.pow/my-application to a directory that contains a Rack application. However, pow can also proxy requests for a custom local domain to a specified port, by creating a file that contains only the port number (such as echo 4200 > ~/.pow/my-application). This makes it easy to develop locally with an actual domain (as a side note, subdomains work too, which is really handy; for example, foobar.my-application.dev will also route to my-application).
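Concretely, the two modes look like this (my-rack-app and my-application are placeholder names):
# Mode 1: serve a Rack app by symlinking its directory into ~/.pow
ln -s /path/to/my-rack-app ~/.pow/my-rack-app

# Mode 2: proxy a custom local domain to a port (what we use for the ember app)
echo 4200 > ~/.pow/my-application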
Tunnels makes it easy to use pow with https.
Setup
# Install pow
curl get.pow.cx | sh
# Set up pow proxy for your ember app
echo 4200 > ~/.pow/my-application
# Start your ember server
ember serve # specify a port here if you used something else for pow proxy
# Check that http://my-application.dev correctly shows your ember app in the browser
# Install tunnels
gem install tunnels # possibly with sudo depending on your ruby setup
# Start tunnels
sudo tunnels
# Now https://my-application.dev should work

Related

Plausible analytics on a server with a webapp

I have Django hosted with Nginx on DigitalOcean. Now I want to install Plausible Analytics. How do I do this? How do I change the Nginx config to get to the Plausible dashboard with mydomain/plausible for example?
Set up Plausible by either running the software directly or in a Docker container; let's say it runs on port 8080.
Then, in your nginx.conf, you should have a server block for your domain.
Within that, add a location block for the path you want Plausible on, with a proxy_pass directive to forward the requests to localhost:8080 (see the sketch below).
Monitor access.log and error.log to debug any issues that come up.
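A rough sketch of what that might look like (the domain, port, and path are assumptions taken from the question, and Plausible itself may need extra configuration to work behind a subpath):
server {
    listen 443 ssl;
    server_name mydomain.example;

    # ... existing ssl_certificate / ssl_certificate_key and site config ...

    location /plausible/ {
        # trailing slash strips the /plausible prefix before proxying
        proxy_pass http://localhost:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}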

NGINX Docker on Server with pre-existing NGINX on Ubuntu Server

I am currently running into an issue with one of my projects, which will run in Docker on my Ubuntu server with an NGINX Docker container managing the reverse proxy for the Django project. The issue is that I already have previous Django projects running on that particular Ubuntu server, so port 80 is already being used by an NGINX server block running on the host itself.
Is there a workaround to run my Docker NGINX alongside the Ubuntu NGINX and have my Docker image run as an "add-on" site? The Django sites hosted there are clients' websites, so I would prefer not to interfere with them if I don't have to.
My project needs HTTPS because it serves data to a React Native app running on Android API 28, which has a security rule that blocks non-HTTPS connections in the app. If anyone else has run into an issue like this, I would appreciate advice on how to tackle it.
I have tried running NGINX in Docker on port 81 instead of port 80 and that works perfectly, but I don't think there is a way to make a secure connection to port 81, is there?
Thanks in advance.
You can't really mess with the default HTTP ports for public endpoints; browsers use 80 and 443 by default. If you change them, your users would have to connect to your.server.com:81 or something similar. Nobody would do that for a public server, but it can be an option for a private one.
I think a reasonable way out of this is to use the host's NGINX to proxy requests into Docker's NGINX (if it makes sense to keep it at all). You can handle HTTPS termination on the host's NGINX and pass plain HTTP into the Docker one.
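For illustration, the host NGINX server block might look roughly like this, assuming the Docker NGINX is published on port 81 as in the question (the domain and certificate paths are placeholders):
server {
    listen 443 ssl;
    server_name newproject.example.com;

    ssl_certificate     /etc/ssl/certs/newproject.crt;
    ssl_certificate_key /etc/ssl/private/newproject.key;

    location / {
        # TLS is terminated here; plain HTTP goes to the Docker NGINX on port 81
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}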
Another option is to use a separate server entirely, so that everything works with no dirty hacking involved.

mac: simplest way to safely serve a django app on port 80 for development

I'd like to open up my django app to other machines in the office during development.
I understand that it's a bad idea to run the django development server as root. The recommended way to serve a django app on port 80, even during development, appears to be django, plus gunicorn, plus nginx. This seems super complicated to me. I got the first two steps working, but am now staring at nginx in utter confusion. There's no mac build on the site. Do I really have to build it from source?
One alternative I've come across is localtunnel. But this seems sketchy to me, and involves setting up public keys and whatnot. Is there any simpler way to serve a django app on a mac from port 80 without running it as root?
Also, just what are the risks of running a django development server on port 80 as root, vs not as root? What are the chances that someone could, say, gain total access to my file system? And, given the default user settings on a mac, is this more likely if I'm running my django dev server as root than if I'm running it as not-root?
Since you mentioned you don't want to run the Django server as root and you are on a mac, you could forward traffic from port 80 to port 8000:
sudo ipfw add 100 fwd 127.0.0.1,8000 tcp from any to any 80 in
and then run the Django server as a normal user (by default it serves on port 8000):
./manage.py runserver
To remove the port forwarding, run:
sudo ipfw flush

How can I serve a Django application using the SPDY protocol?

What is the best way to serve a Django application over the SPDY [1] protocol?
[1] http://www.chromium.org/spdy
One way is to run Django on Jython with Jetty - http://www.evonove.it/blog/en/2012/12/28/django-jetty-spdy-blazing-fast/
Also, apparently nginx has a draft module for SPDY.
It works with nginx > 1.5.10 and Django running as a FastCGI server.
Recent versions of Chrome and Firefox dropped support for SPDY v2, so you need at least SPDY v3 support on the server side. Nginx versions above 1.5.10 support version 3 of the protocol.
Nginx Mainline Installation
Currently (as of Feb 2014) Nginx > 1.5.10 is only available from the mainline branch, not from stable. On most Linux distributions, it is easiest to install mainline packages provided by the nginx project.
Nginx and Django configuration
The Django documentation explains how to run Django with Nginx through fastcgi. The configuration that is provided there can be used as a starting point.
In addition, you need SSL certificates for your host and have to extend the Nginx configuration in the following ways:
The listen configuration option needs to be changed from listen 80; to listen 443 ssl spdy;.
You need to add basic SSL configuration options, most importantly a certificate and key.
With both modifications combined, the configuration may look as follows:
server {
    listen 443 ssl spdy;
    server_name yourhost.example.com;

    ssl_certificate <yourhostscertificate>.pem;
    ssl_certificate_key <yourhostskey>.key;
    ssl_prefer_server_ciphers on;

    location / {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:8080;
    }
}
Then run Django in FastCGI mode, as follows:
python ./manage.py runfcgi host=127.0.0.1 port=8080
Testing your setup
Point your browser to https://yourhost.example.com
You should be able to verify that the connection is made via SPDY:
Chrome: Look for an active SPDY session in chrome://net-internals/#spdy
Firefox: Check the Firebug Network tab and look for the X-Firefox-Spdy:"3.1" response header.

Django app running in EC2, but trying to visit elastic URL returns page not found

I'm just starting out with EC2. I've pulled down a git repo that I started on my local machine, so I know the app works when I run the server there, and it seems to work when I run the server on the EC2 instance I have running, but for some reason, when I visit the Elastic IP address of that instance, I get a page-not-found. Any idea why that might be?
So, I've now started using nginx and made a conf file following the instructions here: https://code.djangoproject.com/wiki/DjangoAndNginx, which is as follows:
server {
    listen 80;
    server_name ec2-54-242-149-154.compute-1.amazonaws.com;

    access_log /var/log/nginx/USBag.access.log;
    error_log /var/log/nginx/USBag.error.log;

    location /basicMap/ {
        alias /home/www/ec2-54-242-149-154.compute-1.amazonaws.com/basicMap/;
        expires 30d;
    }

    location / {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:8080;
    }
}
basicMap is a URL I have already defined in my Django app, and the linked EC2 hostname is the one my server is running on. I am having a lot of difficulty finding documentation on how to proceed or how to determine whether my conf file is correct. Using the standard python manage.py runserver doesn't work, however. Advice on how to proceed?
There is a lot of info about setting up a production django server out there, and I'll give you my personal preferences below, but before all that let's back up and see if we can just get any response from the production server.
To start the development server on your EC2 instance run:
manage.py runserver 0.0.0.0:8000
That command will cause runserver to bind to all interfaces and serve files to the external world. You'll never want to do this outside of development, but it is a good way just to test if your django app is setup before complicating things. Now try hitting your EC2 instance and see if you get a response.
If that's still not working, make sure incoming connections to the server's port are allowed (8000 in the command above, 80 once live); on EC2 that means checking the instance's security group. You could test that the port is open using netcat (nc -l), as shown below.
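For example (a rough sketch; the IP below is a placeholder for your Elastic IP, and some netcat variants need -p before the port):
# On the EC2 instance: listen on a test port (stop anything else using it first)
nc -l 8000

# From your local machine: try to connect to that port
nc -v 203.0.113.10 8000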
Once you are satisfied that you have your app set up, I'd recommend you use nginx as your front-end webserver and gunicorn as your django webserver in production. You'll likely want to look into setting up a virtualenv, supervisord, etc. for your production setup (here is a tutorial: http://senko.net/en/django-nginx-gunicorn/), but all that depends on the specifics of your project.
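As a very rough sketch of the gunicorn side (myproject is a hypothetical project name; adjust the module path and port to your app):
# Install gunicorn and run it behind nginx, binding only to localhost
pip install gunicorn
gunicorn myproject.wsgi:application --bind 127.0.0.1:8000

# nginx then proxies to it instead of fastcgi, e.g.:
#   location / {
#       proxy_pass http://127.0.0.1:8000;
#       proxy_set_header Host $host;
#       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#   }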