I'm new to AWS and I'm in the process of deploying an app there. I have already hosted my frontend in S3 and CloudFront and generated a certificate in order to serve it over HTTPS. Now I need to provide access to my backend. I already created the proper structure in EC2 and I can even retrieve info from my backend over plain HTTP. The problem is that once my frontend is HTTPS and my backend is HTTP, the browser refuses to receive the info, throwing a "mixed content" error.
I have already read a lot of articles from AWS and yet I'm confused about how to implement HTTPS on EC2.
I've created load balancers, a VPC, and so on, but I really can't make it work.
If anyone can help me with this, I'd be thankful!
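To illustrate the error (a minimal sketch with placeholder URLs, not my real endpoints): the page is served over HTTPS from CloudFront, but the call to the EC2 backend uses plain HTTP, which the browser blocks as mixed content.

    // Page served from https://dxxxxxxxxxx.cloudfront.net (placeholder).
    // The plain-HTTP call below is what the browser blocks as mixed content.
    fetch("http://ec2-203-0-113-10.compute-1.amazonaws.com/api/items")
      .then((res) => res.json())
      .then((data) => console.log(data));
    // Once the backend is reachable over HTTPS (e.g. behind a load balancer
    // with an ACM certificate), switching this URL to https:// avoids the block.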
Related
Dear Stack Overflow community, this question has been asked before, but my question is a little bit different.
I am using Elastic Beanstalk to deploy my Django backend, with RDS (PostgreSQL) for the database.
EB generated a link for my backend --> http://XXXXX.region.elasticbeanstalk.com. The issue is that when I send a request from the frontend side (HTTPS), it gives a "Blocked loading mixed active content" error, which comes from making an HTTP request from an HTTPS page. As far as I understand, I have to change the configuration of the Load Balancer of my EC2 instance and add a redirection. To do that successfully I am required to have an SSL certificate. However, when I use ACM (Certificate Manager) to generate one using the exact same link for the backend, it automatically rejects my request.
So my question is: what is the exact process of obtaining the SSL certificate for the default EB link, or are there easier ways to redirect HTTP to HTTPS from the AWS console?
Regards.
So my question is: what is the exact process of obtaining the SSL cert for the default EB link?
There is no process, as this is not possible. You need to have your own domain (e.g. myapp.com); only then can you set up SSL using ACM. Once you have your own domain, the full process of setting up HTTPS on EB is described in the AWS docs.
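As a rough sketch of what that looks like once you have your own domain and an ACM certificate for it (assuming an environment with an Application Load Balancer; the file name and certificate ARN below are placeholders), an .ebextensions config file can add the HTTPS listener:

    # .ebextensions/https-listener.config -- sketch only; use the ARN of the
    # certificate ACM issued for your own domain.
    option_settings:
      aws:elbv2:listener:443:
        ListenerEnabled: 'true'
        Protocol: HTTPS
        SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example-id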
How can I send an HTTPS request from one deployment to another using AWS Lightsail's private domain?
I've created two AWS Lightsail container deployments using two Docker images. I'd like to send an HTTPS request from one deployment ("sender") to the other ("receiver"). This works fine when the receiver's public endpoint is enabled. However, I don't want to expose this service to the public, but instead route traffic using AWS Lightsail's private domain.
My problem is that when I try to send an HTTPS request from the "sender" to the "receiver"'s private domain (<service_name>.service.local:<port>), I get https://<service_name>.service.local:52020/tester/status net::ERR_NAME_NOT_RESOLVED on the "sender"'s HTML page. According to the Lightsail docs (section "Private domain") this should be accessible to my "Lightsail resources in the same AWS Region as your service".
I've found a similar question and answer on Stack Overflow. I tried that answer using my region but failed, because the Lightsail container requires HTTPS while .service.local requires HTTP. After creating an Amazon Linux instance, I succeeded in making an HTTP request but failed to make an HTTPS request (screenshot below). Meanwhile, Lightsail strictly asks you to use HTTPS.
If I force an HTTP request from the HTTPS webpage, Chrome generates a Mixed content: The page at ... was loaded over HTTPS but requested an insecure ... error. I can work around the HTTPS problem by using Next.js API routes, but this doesn't feel secure because Next.js API routes are publicly accessible.
Is there anything that I may be missing here?
Things I've verified:
The image is up and running and works fine when connecting to it using the public domain
I'm running both instance and container service in the same region
Thank you in advance.
Screenshots (not reproduced here): running dig inside the Docker entrypoint script, and the error message when the sender sends an HTTP request to the receiver.
I made my two AWS Lightsail containers, a frontend container running Next.js and a backend container running Flask, talk to each other using the following steps:
Launch a Lightsail instance using Amazon Linux in the region where I want to deploy my container. Copy /etc/resolv.conf from this Amazon Linux instance and update the Dockerfile to overwrite the /etc/resolv.conf file in my Docker image (see the sketch below).
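The overwrite step might look like this (a sketch; it assumes the copied file was saved as resolv.conf in the Docker build context):

    # Replace the container's resolver config with the one copied from the
    # Amazon Linux instance so that *.service.local names resolve.
    COPY resolv.conf /etc/resolv.conf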
To make the API request using HTTP instead of HTTPS and get around the Mixed content: The page at ... was loaded over HTTPS but requested an insecure ... error, I used a Next.js API route to relay the API request. So, for instance, a page on the frontend container makes an API request to /api on the same container, and the /api route makes an HTTP request to the backend container.
The API route was coded with security measures so that users cannot use it to reach arbitrary endpoints in the backend container.
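A minimal sketch of such a relay route (the backend hostname, port, and path are taken from the question; everything else is an assumption):

    // pages/api/status.ts -- relays one fixed backend endpoint only.
    import type { NextApiRequest, NextApiResponse } from "next";

    const BACKEND_URL = "http://<service_name>.service.local:52020/tester/status";

    export default async function handler(req: NextApiRequest, res: NextApiResponse) {
      // Only GET is allowed, and only the fixed URL above is relayed,
      // so the route cannot be used to reach arbitrary backend endpoints.
      if (req.method !== "GET") {
        res.status(405).end();
        return;
      }
      try {
        // Plain HTTP is fine here: this call runs server-side over
        // Lightsail's private domain, not in the browser.
        const upstream = await fetch(BACKEND_URL);
        const body = await upstream.json();
        res.status(upstream.status).json(body);
      } catch {
        res.status(502).json({ error: "backend unreachable" });
      }
    }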
https "pages" are often mixed content where resources such as pictures are drawn from the http folders not the "https" site folder, hence the request to get such a resource is http because of its configuration by location, so it will be called by http to obtain and then not be crypted (see server configuration for https folder location that requires access to it by that protocol).
Of protocols, the message from another post implies and may be that to communicate "privately" is NOT a web service for public so such communications require using ssl:// secure protocol (alike using ssh://) NOT https:// secure public web server protocol of both require certificate. (hazard a guess)
ssl may be what is used privately across local.
The following AWS links recommend having different accounts for development and for the service.
https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
https://aws.amazon.com/cli/
I'm working on this site that I'm hosting with AWS. I'm hosting a vue.js frontend in an S3 bucket and the backend utilizes Spring Boot hosted with Elastic Beanstalk and a MySQL RDS instance. After playing around some, I got the frontend to serve up files via HTTPS, but now my requests to the Spring API are failing.
I've done a lot of digging on this and it seems there may be several ways to handle it, but I just keep getting stuck and not knowing where to turn next. I've tried playing around with setting up a load balancer, and also tried configuring a proxy in a .ebextensions configuration file.
This whole thing was working when I set it up with HTTP originally, but now that the front-end is serving up HTTPS it won't work.
Web browsers must be blocking your mixed HTTP/HTTPS content because of their built-in security. You need to make sure that you set up the whole site using either HTTPS or HTTP. As you have already set up the S3 content to be served over HTTPS, you must now configure your Elastic Beanstalk environment to use HTTPS too. Here is the link to help you with that:
Configuring HTTPS for Your Elastic Beanstalk Environment
If your site is built with a CMS (WordPress/Joomla/etc.), then there are plugins/extensions that handle that. I had a similar situation with a WordPress site and used the plugin called "SSL Insecure Content Fixer". It worked without a hitch, rather than my having to scan through the entire site for mixed HTTP/HTTPS content.
How can I tell that I am using AWS Certificate Manager correctly, and that any remaining problems getting my site to load over HTTPS are due to a mistake I am making in my Apache configuration?
In AWS Certificate Manager, I see "Success! Your certificate was issued successfully." Does that mean there are no further steps for me to complete in the AWS console, and I need only get my Apache configuration correct to finish?
Currently, when I visit a URL at my site over plain HTTP it loads fine, but when I visit it over HTTPS, the browser tries to load the page but never finishes.
I have followed the instructions for creating an HTTPS listener, but still do not know if I am done with all necessary steps in AWS console. How would I know?
Edit: To clarify, I am using an Elastic Load Balancer (ELB), since the documentation indicated I need to use ELB with AWS Certificate Manager (ACM). However, I do not know how to determine if I have configured everything correctly in AWS console that I need to in order to access the site at HTTPS.
Edit 2: This might come close to answering my question, possibly, but I don't know how to do this: "You can use curl, telnet etc from your local machine to verify 443 port status on ELB" -- #vivekyad4v.
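(For reference, the check being suggested looks roughly like this; the hostname is a placeholder for the ELB's DNS name, and -k skips certificate-name validation because the ACM certificate is issued for your domain rather than the ELB hostname.)

    curl -vk https://my-load-balancer-1234567890.us-east-1.elb.amazonaws.com/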
ACM (AWS Certificate Manager) supports AWS resources like ELB, CloudFront, API Gateway, etc. You can add SSL certificates to these resources via the AWS console.
Currently, it doesn't support EC2. You cannot use ACM certificates directly on EC2 instances; you will need a load balancer in front of them. Once you have a load balancer, SSL termination happens on the load balancer and not on the EC2 instance.
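For example, adding the HTTPS listener that terminates SSL on the load balancer can look roughly like this (a sketch assuming an Application Load Balancer, with placeholder ARNs):

    aws elbv2 create-listener \
        --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef \
        --protocol HTTPS --port 443 \
        --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/example-id \
        --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef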
Once that is set up, you can change your Apache server config to redirect all HTTP requests to HTTPS (a sketch follows the links below).
Add certificate to ELB - "https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-update-ssl-cert.html"
Update apache config - "https://aws.amazon.com/premiumsupport/knowledge-center/redirect-http-https-elb/"
No EC2 support - "https://aws.amazon.com/certificate-manager/faqs/"
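The Apache redirect from the second link boils down to something like the following (a sketch; it assumes the load balancer terminates TLS and forwards plain HTTP to the instance with the X-Forwarded-Proto header set, and that mod_rewrite is enabled):

    <VirtualHost *:80>
        RewriteEngine On
        # Requests that reached the load balancer over HTTPS arrive here with
        # X-Forwarded-Proto: https; only redirect the plain-HTTP ones.
        RewriteCond %{HTTP:X-Forwarded-Proto} !https
        RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
    </VirtualHost>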
I have a web app on AWS using CloudFront, an Elastic Load Balancer, and an EC2 host.
I am attempting to place 'Basic Access Authentication' on it to give it simple password protection.
Do any of these AWS services provide this?
I notice that S3 has documentation on requiring the http authentication header, but I don't notice such documentation for the CloudFront, ELB, or EC2 services my app uses.
How can I set up Basic Access Authentication for my app?
It's fairly simple.
Just set up HTTP basic authentication at the web server level.
You can follow this if you're using Nginx: https://www.digitalocean.com/community/tutorials/how-to-set-up-basic-http-authentication-with-nginx-on-ubuntu-14-04
And for Apache: https://www.digitalocean.com/community/tutorials/how-to-set-up-password-authentication-with-apache-on-ubuntu-14-04
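For the Apache case, the end result is roughly the following (a sketch; the password-file path and realm name are assumptions, and the file is created first with htpasswd):

    # Create the password file once: htpasswd -c /etc/apache2/.htpasswd someuser
    <Location "/">
        AuthType Basic
        AuthName "Restricted"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </Location>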
Let me know if this helps.