SSL certificate for the IP address [Nginx+Django server] - django

I have an Nginx+Django server on a Google Cloud virtual machine, running on a specific port (8080). I am able to access the service at http://external_ip:8080, but I'm not able to access it over https. I don't have a domain name. For our application it is not strictly necessary, as it is just a REST API that performs some tasks. I am relatively new to terms like SSL certificate, domain name, Nginx, etc. It would be great if someone could help me out. Thanks in advance.

Two paths:
Configure Nginx to serve on port 443 with TLS, and configure the GCP firewall (using network tags) to allow HTTPS traffic to the instance. A rough sketch of this path is below.
Alternatively, use tags to configure firewall rules so the instance only serves port 8080 to GCP load balancers, and have HTTP(S) Load Balancing serve the content to the public over TLS.
In either case you'll have annoying TLS issues without a DNS name (most public CAs won't issue a certificate for a bare IP address, and a self-signed one won't be trusted by clients), so you should get one. Alternatively, look into serving Django from App Engine Standard.
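A very rough sketch of the first path, assuming you accept a self-signed certificate since there is no domain name (clients will still show trust warnings), and assuming the instance carries a network tag such as https-server; the file paths, tag name, and CN are placeholders:
```
# Create a self-signed certificate for the instance's external IP (placeholder paths/CN)
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/nginx/ssl/selfsigned.key \
  -out /etc/nginx/ssl/selfsigned.crt \
  -subj "/CN=EXTERNAL_IP"

# Allow HTTPS through the GCP firewall for instances tagged https-server
gcloud compute firewall-rules create allow-https-443 \
  --allow tcp:443 \
  --target-tags https-server

# After adding a "listen 443 ssl;" server block that points at the key and cert,
# test and reload Nginx (assumes a systemd-based image)
sudo nginx -t && sudo systemctl reload nginx
```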

Related

Best way to implement HTTPS for API hosted on AWS EC2 machine

First of all, I'm in no way an expert at security or networking, so any advice would be appreciated.
I'm developing an iOS app that communicates with an API hosted on an AWS EC2 Linux machine.
The API is deployed using **FastAPI + Docker**.
Currently, I'm able to communicate with my remote API using HTTP requests to my server's public IP address (after opening port 80 for TCP) and transfer data between the client and my server.
One of my app's features requires sending a private cookie from the client to the server.
Since having the cookie allows potential attackers to make requests on behalf of the client, I intend to transfer the cookie securely with HTTPS.
I have several questions:
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
The FastAPI "Deploy with Docker" docs recommend this article for implementing TLS for the server (using Docker Swarm Mode and Traefik). Is that guide relevant for my use case?
In that article, it says to "Define a server name using a subdomain of a domain you own." Do I really need to own a domain to implement HTTPS? Can't I just keep using the server's IP address to communicate with it?
Thanks!
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
With HTTP, all traffic between your clients and the EC2 instance is in plain text. With HTTPS the traffic is encrypted, so it cannot be read or tampered with in transit.
FastAPI "Deploy with Docker"
Sadly, I can't comment on the article.
Do I really need to own a domain to implement HTTPS?
Yes. SSL certificates can only be issued for domains that you control; you can't get a certificate for a domain that is not yours. Publicly trusted certificates are generally not issued for bare IP addresses, so in practice you need a domain.
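Purely as a hedged illustration: once you do own a domain and have pointed a record such as api.example.com at the EC2 instance, one common way to get a free, trusted certificate is certbot with Let's Encrypt (a different tool than the Traefik setup in the FastAPI article; the domain here is a placeholder). Issuance is essentially the CA checking that you control the domain:
```
# Prove control of the domain via an HTTP challenge on port 80 and obtain a certificate
sudo certbot certonly --standalone -d api.example.com

# Files are written under /etc/letsencrypt/live/api.example.com/ and can then be
# used by a reverse proxy or mounted into the Docker container
```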

SSL cert for AWS domain?

I have a backend service I'm running in Fargate. I need this service to have an SSL certificate on its load balancer so that it can talk to other HTTPS services. I've created the load balancer and it gives me an AWS domain (my-cool-app.us-east-1.elb.amazonaws.com).
Now, when I try to request a certificate through ACM, it fails and says "Additional verification required". So I'm not sure if it's possible to add an SSL certificate to this load balancer without registering a custom domain?
Also, this is a Django app and I haven't done anything other than keep running it with runserver, which I know isn't good for production, but I just need to start by making it work as a dev environment. Do I need to change the way Django runs in order for SSL to work? Or is the load balancer sufficient?
To use an SSL certificate for a domain you need to have control over that domain. For the AWS managed certificate service (ACM) you can verify through either DNS validation or email validation, both of which essentially require domain control.
As you're trying to use ACM for an AWS-owned domain (*.elb.amazonaws.com), someone from AWS would need to approve the certificate, which they won't.
Regarding your second point, what you're describing is SSL offloading: the load balancer serves HTTPS and terminates the encrypted connection, then forwards the request to the Fargate container using the protocol and port defined in the target group, so Django itself can keep speaking plain HTTP.
The only other thing to consider is how the site is presented to the user; for example, ensure that all CSS, JS, and links on your site are served over HTTPS. You can detect whether the incoming request used HTTPS at the load balancer by inspecting the X-Forwarded-Proto header in your application.
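Purely for illustration, and assuming you later attach a domain you control with an ACM certificate for it (the ARNs below are placeholders), terminating TLS on the load balancer while the Fargate target group stays on plain HTTP could look roughly like this with the AWS CLI:
```
# Add an HTTPS listener that terminates TLS with the ACM certificate and forwards
# requests to the existing (HTTP) target group in front of the Fargate service
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-cool-app/abc123 \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-cool-app-tg/abc123
```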

Hosting React page on S3 and making REST API calls to server on Elastic Beanstalk

Background
I am trying to deploy a dummy application with a React frontend and a Django backend interacting via a REST API. I have done the following:
Used an S3 bucket to host a static website and deployed my React code to it
Put CloudFront in front of the S3 bucket, set up a certificate, and pointed my domain name (from GoDaddy) at this address
Kicked off an Elastic Beanstalk environment following AWS's Python environment tutorial
Set up Postgres RDS and linked the Django server with it
So now I can do the following
Access my frontend using https via my domain name (https://www.example.com)
Access the Django admin site using the Elastic Beanstalk URL and update items
i.e. each component is up and running
Problem
I am having trouble with:
Making a secure REST API call from the static page to the Elastic Beanstalk environment. Before I set up the certificates I could easily make REST API calls.
The guides I can find usually involve putting a domain name on the Elastic Beanstalk environment, which I imagine does not apply to my case (or does it?)
I tried to follow this FAQ and updated the load balancer configuration to accept HTTPS on 443 and forward to HTTP on 80. But I am using the same certificate as for CloudFront, which does not sound right to me.
Would appreciate help with
how to solve the above ssl connection issue
or is there a better architecture for what I'm trying to achieve here?
According to Request a certificate in ACM for Elastic Beanstalk backend, it sounds like I have to use a subdomain, request a certificate for that subdomain, and use Route 53 to direct requests for that subdomain to the Elastic Beanstalk environment. Would that be the case?
Thank you in advance!
By default the EB URL is HTTP only. To use HTTPS you need to deploy an SSL certificate on your ALB.
In order to do that you need a custom domain, because you can only associate an SSL certificate with domains that you control. Thus, normally you would get a domain (you seem to already have one from GoDaddy). In this case you can set up a subdomain (e.g. api.my-domain.com) on GoDaddy, and then use AWS ACM to register a free public SSL certificate for api.my-domain.com.
Once the certificate is verified, using either DNS validation (easier) or email validation, you deploy it on your ALB with an HTTPS listener. Obviously, you will also need to point api.my-domain.com at the EB environment's URL. You can also redirect HTTP traffic on your ALB from port 80 to 443 so that HTTPS is always used.
Then, in your front-end application, you only use https://api.my-domain.com, not the original EB URL.
There can also be CORS issues alongside this, so be wary of them as well.
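A minimal CLI sketch of the ACM step, assuming api.my-domain.com as the subdomain (both the domain and the certificate ARN are placeholders); the CNAME record returned by the second command must be created in GoDaddy's DNS for validation to complete:
```
# Request a free public certificate for the subdomain, using DNS validation
aws acm request-certificate \
  --domain-name api.my-domain.com \
  --validation-method DNS

# Look up the CNAME name/value that ACM asks you to create for validation
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555 \
  --query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```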

Gatsby site serving on EC2 with pm2/node and an AWS classic load balancer needs HTTPS

I am running a Gatsby site in development mode as a dev server on EC2, with a load balancer pointing from port 80 to 8000. I have set up a CNAME on my domain's DNS to point to the load balancer, and this works fine. However, I need to display this page as an iframe in sanity.io as a web preview, which requires HTTPS.
I've read through this: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html and most of it is pretty straightforward.
What I have done so far is create a listener for 443 HTTPS on the load balancer and add HTTPS 443 to the security group. I have successfully issued a certificate with AWS for the subdomain I am using and attached it to the load balancer listener.
Gatsby has an article about custom certs for development mode here: https://www.gatsbyjs.org/docs/local-https/#custom-key-and-certificate-files. What I am looking for are the cert file, the authority file, and the key file, in order to pass the command below.
Where in AWS Certificate Manager do I find these files? I think that is the last piece I need to get HTTPS working; correct me if I am wrong.
Thanks ahead of time.
gatsby develop --https --key-file ../relative/path/to/key.key --cert-file ../relative/path/to/cert.crt --ca-file ../relative/path/to/ca.crt
This is the process I used to request my certificate, and it says it's issued:
https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html
But how do I use it with the custom https command for Gatsby?
There is an export option, but it says it's only for private keys. Do I need to create a private key so that I can then export the files I need?
Do I even need to run HTTPS on Gatsby's side? I watched a video using Apache where no change was made to the Apache server to get HTTPS working with the load balancer.
Here is a screenshot of my load balancer listener
Here is an image of my security groups
If I run gatsby develop with --https it breaks my site; I can no longer visit it via the load balancer or on port 8000. So I am not sure what to do here.
I would suggest not encrypting the connection between your ELB and the EC2 instances. If your EC2 instances are not publicly reachable, but only accessible through the load balancer, it is best practice to terminate the SSL connection on the load balancer. There is no need to encrypt HTTP requests inside an AWS VPC (i.e. between the ELB and the target instances).
You can create a load balancer that listens on both the HTTP (80) and HTTPS (443) ports. If you specify that the HTTPS listener sends requests to the instances on port 80, the load balancer terminates the requests and communication from the load balancer to the instances is not encrypted. [1]
There is some discussion (e.g. on the blog of Kevin Burke) about whether it is necessary to encrypt traffic inside a VPC. [2] However, most people are probably not doing it.
What this means for you: use the same instance protocol for your targets as before, HTTP via port 8000, for both listeners. Do not set up SSL for your Gatsby service; use a plain HTTP server config instead. No changes are necessary to the ELB targets when SSL is terminated on the load balancer.
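Concretely, a minimal sketch under the assumptions above (the dev server keeps listening on plain HTTP on port 8000, and the 443 listener on the ELB forwards to instance port 8000 over HTTP):
```
# Keep serving plain HTTP; TLS is terminated at the load balancer, so the
# --https / --key-file / --cert-file flags are not needed
gatsby develop -H 0.0.0.0 -p 8000
```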
References
[1] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
[2] https://acloud.guru/forums/aws-certified-security-specialty/discussion/-Ld2pfsORD6ns5dDK5Y7/tlsssl-termination?answer=-LecNy4QX6fviP_ryd7x

AWS Install SSL from Certificate Manager (Free from AWS) to ELB and apply to EC2 Windows Platform IIS Instance

Greetings
I have created the certificate through Certificate Manager in AWS, the free one, successfully verified it, and attached it to the Elastic Load Balancer (ELB). The status of the certificate shows it's issued, and "Is Used?" shows Yes in Certificate Manager.
Overall, I have completed these two steps without any problem, but SSL does not work with my domain name. When I type "mydomain.com" with or without the http:// prefix it works, but when I type "mydomain.com" with the https:// prefix it does not work.
I have researched to find a solution and a way to install SSL into Microsoft Windows IIS on AWS, but no document describes that.
Can anyone share this experience? I really appreciate it.
Looking forward to the reply, and thanks.
You do not need to set up SSL on your web server when you use a load balancer. Assign the SSL certificate to the load balancer (as you did). Then, in your HTTPS listener on the load balancer, listen on HTTPS but connect to your web server over HTTP.
In the Amazon Console for your load balancer under the "Listeners" tab, the "Load Balancer Protocol" will be HTTPS and the "Instance Protocol" will be HTTP.
This has the benefit of offloading SSL to the load balancer which decreases CPU load on your web server.
If you do want to set up SSL on the web server itself, then you cannot use the Amazon SSL certificate. You will need to use the standard methods and purchase a certificate from someone else.
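For reference, the HTTPS-to-HTTP listener described above can also be created for a classic ELB from the AWS CLI; this is only a sketch, and the load balancer name, certificate ARN, and instance port are placeholders:
```
# HTTPS (443) on the load balancer, plain HTTP (80) to the IIS instance,
# using the ACM certificate that is already issued
aws elb create-load-balancer-listeners \
  --load-balancer-name my-load-balancer \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555"
```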