Just wondering if anyone has had any luck (or found a solution) using the AWS SDK to access AWS resources such as S3 from a service injected with an Istio sidecar.
As Istio's documentation points out:
- Outbound traffic goes through the Istio sidecar, so you need to whitelist the external DNS names or IPs.
- Plain HTTPS is not available by default; an external TLS service can only be reached by rewriting the URL to something like "http://www.google.com:443".
However, the AWS SDK handles the HTTPS connection itself, so I can't rewrite the URL. As a result, I get an "http: server gave HTTP response to HTTPS client" error.
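For reference, the whitelisting Istio's docs mention is done with a ServiceEntry. Below is a minimal sketch that lets TLS traffic to S3 pass straight through the sidecar; the API version and the exact host names are assumptions (check the S3 endpoints for your region and your Istio version):

apiVersion: networking.istio.io/v1beta1   # older Istio uses v1alpha3
kind: ServiceEntry
metadata:
  name: aws-s3
spec:
  hosts:
    - s3.amazonaws.com                    # assumed endpoints; add your
    - s3.ap-southeast-1.amazonaws.com     # regional/bucket endpoints as needed
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
    - number: 443
      name: tls
      protocol: TLS

With protocol: TLS the sidecar forwards the encrypted bytes as-is, so the SDK's normal https:// endpoints should work without any URL rewriting.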
Many thanks.
Related
I am trying to make one of the services (say an echo server for this example) on my EKS cluster, inside my VPC, available to external users. My approach is to first create an internal/private ALB that routes traffic to the service, and then have an API Gateway HTTP API send external traffic to the ALB. Here are the steps I have taken:
1. Create a security group that allows inbound traffic on port 80 from all IPv4 and IPv6 addresses.
2. From the API Gateway page, create a VPC Link. Select the VPC of the EKS cluster and ALB, as well as the private subnets associated with the ALB.
3. Create an API Gateway HTTP API with the $default auto-deployable stage.
4. Create a route for ANY /{proxy+}. For the record, I have also tried using the path ANY /, but I still get the same result.
5. For the route created in step 4, create an integration. The integration is set to use the VPC Link created in step 2. I also select my ALB and its listener for port 80.
However, when I follow the above sequence of steps and go to the address created by API Gateway, I get the following response: {"message": "Not found"}.
Any pointers on what I'm doing wrong would be greatly appreciated.
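For reference, here is the same wiring expressed as a CloudFormation sketch; the resource names, subnet IDs, and listener ARN are assumptions:

Resources:
  EchoHttpApi:
    Type: AWS::ApiGatewayV2::Api
    Properties:
      Name: echo-http-api
      ProtocolType: HTTP
  EchoStage:
    Type: AWS::ApiGatewayV2::Stage
    Properties:
      ApiId: !Ref EchoHttpApi
      StageName: '$default'   # the auto-deploy stage from step 3
      AutoDeploy: true
  EchoVpcLink:
    Type: AWS::ApiGatewayV2::VpcLink
    Properties:
      Name: echo-vpc-link
      SubnetIds:              # the ALB's private subnets (assumed IDs)
        - subnet-aaaa1111
        - subnet-bbbb2222
  EchoIntegration:
    Type: AWS::ApiGatewayV2::Integration
    Properties:
      ApiId: !Ref EchoHttpApi
      IntegrationType: HTTP_PROXY
      IntegrationMethod: ANY
      ConnectionType: VPC_LINK
      ConnectionId: !Ref EchoVpcLink
      # The ALB's port 80 listener ARN (assumed value):
      IntegrationUri: arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/echo-alb/1234567890abcdef/1234567890abcdef
      PayloadFormatVersion: '1.0'
  EchoRoute:
    Type: AWS::ApiGatewayV2::Route
    Properties:
      ApiId: !Ref EchoHttpApi
      RouteKey: 'ANY /{proxy+}'
      Target: !Sub 'integrations/${EchoIntegration}'

If the route or stage is missing, an HTTP API responds with {"message":"Not Found"}, which is consistent with the symptom above.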
Are you sure the API Gateway is deployed? The error message seems to indicate no routes are available.
If it is deployed, but something is wrong with your backend, the error is typically like so:
{"message":"Internal Server Error"}
How can I send an HTTPS request from one deployment to another using AWS Lightsail's private domain?
I've created two AWS Lightsail container deployments using two Docker images. I'd like to send HTTPS requests from one deployment ("sender") to another ("receiver"). This works fine when the receiver's public endpoint is enabled. However, I don't want to expose this service to the public; instead I want to route traffic using AWS Lightsail's private domain.
My problem is that when I try to send an HTTPS request from the "sender" to the "receiver"'s private domain (<service_name>.service.local:<port>), I get https://<service_name>.service.local:52020/tester/status net::ERR_NAME_NOT_RESOLVED on the "sender"'s HTML page. According to the Lightsail docs (section "Private domain"), this should be accessible to my "Lightsail resources in the same AWS Region as your service".
I've found a similar question and answer on Stack Overflow. I tried that answer using my region but failed, because the Lightsail container requires HTTPS while .service.local only accepts HTTP. After creating an Amazon Linux instance, I succeeded in making an HTTP request but failed to make an HTTPS request (screenshots below). Meanwhile, Lightsail strictly requires you to use HTTPS.
If I force the HTTPS page to send an HTTP request, Chrome raises a Mixed content: The page at ... was loaded over HTTPS but requested an insecure ... error. I can get around the HTTPS problem by using Next.js API routes, but this doesn't feel secure because Next.js API routes are publicly accessible.
Is there anything that I may be missing here?
Things I've verified:
- The image is up and running and works fine when I connect to it using the public domain.
- Both the instance and the container service are running in the same region.
Thank you in advance.
Some screenshots:
- Running dig inside the Docker entrypoint script
- The error message when the "sender" sends an HTTP request to the "receiver"
I made my two AWS Lightsail containers, a Frontend Container running Next.js and a Backend Container running Flask, talk to each other using the following steps:
1. Launch a Lightsail instance using Amazon Linux in the region where the container service is deployed. Copy /etc/resolv.conf from this Amazon Linux instance, and update the Dockerfile to overwrite /etc/resolv.conf inside the container with it.
2. To make the API request over HTTP instead of HTTPS, and to get around the Mixed content: The page at ... was loaded over HTTPS but requested an insecure ... error, I used a Next.js API route to relay the API request. So, for instance, a page on the Frontend Container makes an API request to /api on the same container, and the /api route makes an HTTP request to the Backend Container.
3. The API route was coded with security measures so that users cannot use it to access arbitrary endpoints on the Backend Container.
https "pages" are often mixed content where resources such as pictures are drawn from the http folders not the "https" site folder, hence the request to get such a resource is http because of its configuration by location, so it will be called by http to obtain and then not be crypted (see server configuration for https folder location that requires access to it by that protocol).
Of protocols, the message from another post implies and may be that to communicate "privately" is NOT a web service for public so such communications require using ssl:// secure protocol (alike using ssh://) NOT https:// secure public web server protocol of both require certificate. (hazard a guess)
ssl may be what is used privately across local.
The following AWS links recommend having different accounts for development and for the service.
https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
https://aws.amazon.com/cli/
I'm trying to figure out how I can call my Elastic Beanstalk environment over HTTPS. Ultimately I want to be able to use API Gateway to forward HTTPS requests to it*.
In the Elastic Beanstalk console I configured the load balancer to use my website's SSL cert (mywebsite.com) on port 443, with an instance port of 80 (whatever that means; I was following https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-elb.html).
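For what it's worth, the instance port is the backend port: the load balancer accepts HTTPS on 443 and forwards the decrypted traffic to port 80 on the EC2 instances. A minimal sketch of that guide's listener configuration as an .ebextensions option-settings file, assuming a classic load balancer and a made-up certificate ARN:

option_settings:
  aws:elb:listener:443:
    ListenerProtocol: HTTPS
    # Assumed ACM certificate ARN; replace with your own:
    SSLCertificateId: arn:aws:acm:ap-southeast-1:123456789012:certificate/example-cert-id
    InstancePort: 80        # the load balancer talks plain HTTP to the instances
    InstanceProtocol: HTTP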
Before wiring up API Gateway, I first tried calling my Elastic Beanstalk endpoint directly, changing http:// to https://. Using Postman, I got:
Error: Hostname/IP does not match certificate's altnames: Host:
myService-prod.eba-p3t3saxf.ap-southeast-1.elasticbeanstalk.com. is
not in the cert's altnames: DNS:*.mywebsite.com
No dice. I then thought that maybe if the request originated from my website's domain it might work, so I tried configuring API Gateway, but I just get back a 500 Internal Server Error. (Note: if I change the endpoint URL inside API Gateway from https to http, all is good.)
So what do I need to do? I tried reading https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html, but only got as far as described above. I feel like I do need a certificate, but when I tried using AWS Certificate Manager to generate one for myService-prod.eba-p3t3saxf.ap-southeast-1.elasticbeanstalk.com, I couldn't validate it (it fails both email and DNS validation). I don't think I fully understand what I need to do or see the big picture. Can someone help me out, ideally with specific instructions?
*Actually, that is a question in itself. If my API Gateway endpoint is HTTPS, is it safe for API Gateway to then call my Elastic Beanstalk environment over plain HTTP, given that we're already inside AWS?
Thanks
I tried using Amazon's certificate manager to generate a certificate for myService-prod.eba-p3t3saxf.ap-southeast-1.elasticbeanstalk.com
You can't generate an SSL certificate for this domain. It is an AWS-owned and -managed domain. To get a proper, valid SSL certificate you have to have your own domain that you control.
From your post it's not clear whether you actually own the domain mywebsite.com. If not, and you want to stay within AWS, you can use Route 53 to buy the domain you want, though any domain provider will be fine. Once you have your own custom domain, you can set up a hosted zone for it in Route 53 and point it at your EB environment's load balancer.
With the domain set up, you can use AWS ACM to issue a free, valid public SSL certificate for your domain and deploy it on the load balancer.
In your API Gateway you would then use your own domain for the HTTP integration, not the default AWS EB domain.
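A minimal sketch of that setup as CloudFormation, assuming a hosted zone for mywebsite.com already exists and using a made-up subdomain and zone ID:

Resources:
  SiteCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: api.mywebsite.com          # assumed subdomain
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: api.mywebsite.com
          HostedZoneId: Z0123456789EXAMPLE   # assumed hosted zone ID
  EbAliasRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: mywebsite.com.         # trailing dot required
      Name: api.mywebsite.com
      Type: CNAME
      TTL: '300'
      ResourceRecords:
        - myService-prod.eba-p3t3saxf.ap-southeast-1.elasticbeanstalk.com

The certificate ACM issues is what you attach to the EB load balancer's 443 listener; API Gateway then integrates with https://api.mywebsite.com instead of the default EB domain.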
I have a YAML file that attaches an SSL certificate (provided by AWS Certificate Manager) to the load balancer for a Kubernetes deployment. However, we are running the Kubernetes cluster in an AWS China account, where the Certificate Manager option is not available. If I have an SSL certificate provided by GoDaddy, how can I install it? Are there alternative ways to install the certificate other than on the load balancer? Can I install it in my Tomcat container itself and build a new image with it?
As far as I know, you cannot set up an ELB deployed with a Kubernetes Service to use a certificate which is NOT an ACM certificate. In fact, if you take a look at the possible annotations here, you'll see that the only annotation available to select a certificate is service.beta.kubernetes.io/aws-load-balancer-ssl-cert, and the documentation for that annotation says the following:
ServiceAnnotationLoadBalancerCertificate is the annotation used on the
service to request a secure listener. Value is a valid certificate ARN.
For more, see http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
CertARN is an IAM or CM certificate ARN, e.g. arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
As for your question: you can certainly terminate SSL inside your Kubernetes Pod and make the ELB a simple TCP proxy.
In order to do so, you need to add the following annotation to your Service manifest:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'tcp'
Also, you will need to forward both your HTTP and HTTPS ports in order to handle the HTTP-to-HTTPS redirect correctly inside your pod; a minimal sketch follows.
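Here is a minimal Service sketch of that setup, assuming TLS is terminated by Tomcat inside the Pod (the names and container ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: tomcat
  annotations:
    # Tell the AWS cloud provider to create plain TCP listeners,
    # so TLS passes through to the Pod untouched:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'tcp'
spec:
  type: LoadBalancer
  selector:
    app: tomcat          # assumed Pod label
  ports:
    - name: http
      port: 80
      targetPort: 8080   # plain HTTP, used for the HTTP-to-HTTPS redirect
    - name: https
      port: 443
      targetPort: 8443   # Tomcat connector configured with the GoDaddy certificate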
If you need more specific help, please post your current manifest.
What is my indication that I am using AWS Certificate Manager correctly, and that any remaining problems getting my site to load over HTTPS are due to a mistake in my Apache configuration?
In AWS Certificate Manager, I see "Success! Your certificate was issued successfully." Does that mean there are no further steps for me to complete in the AWS console, and I need only get my Apache configuration correct to finish?
Currently, when I visit a URL on my site over HTTP it loads fine, but when I visit it over HTTPS, the browser tries to load the page but it never finishes.
I have followed the instructions for creating an HTTPS listener, but still do not know if I am done with all necessary steps in AWS console. How would I know?
Edit: To clarify, I am using an Elastic Load Balancer (ELB), since the documentation indicated I need to use ELB with AWS Certificate Manager (ACM). However, I do not know how to determine whether I have configured everything I need to in the AWS console in order to access the site over HTTPS.
Edit 2: This might come close to answering my question, but I don't know how to do this: "You can use curl, telnet etc from your local machine to verify 443 port status on ELB" -- #vivekyad4v.
ACM (AWS Certificate Manager) supports AWS resources like ELB, CloudFront, API Gateway, etc. You can add SSL certificates to these resources via the AWS console.
Currently, it doesn't support EC2. You cannot use ACM certificates directly on EC2 instances; you will need a load balancer in front of them. Once you have a load balancer, SSL termination happens on the load balancer, not on the EC2 instance.
Once that is set up, you can change your Apache server config to redirect all HTTP requests to HTTPS.
- Add certificate to ELB: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-update-ssl-cert.html (see the sketch after these links)
- Update Apache config: https://aws.amazon.com/premiumsupport/knowledge-center/redirect-http-https-elb/
- No EC2 support: https://aws.amazon.com/certificate-manager/faqs/
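A minimal sketch of the listener setup the first link describes, as a CloudFormation fragment for a classic ELB (the certificate ARN and availability zone are assumptions):

Resources:
  SiteLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones:
        - us-east-1a               # assumed zone
      Listeners:
        - LoadBalancerPort: '443'
          Protocol: HTTPS
          InstancePort: '80'       # Apache keeps listening on plain HTTP
          InstanceProtocol: HTTP
          # Assumed ACM certificate ARN; replace with your own:
          SSLCertificateId: arn:aws:acm:us-east-1:123456789012:certificate/example-cert-id
        - LoadBalancerPort: '80'   # kept open so Apache can redirect HTTP to HTTPS
          Protocol: HTTP
          InstancePort: '80'
          InstanceProtocol: HTTP

With this in place, the browser's TLS handshake terminates at the ELB; an HTTPS page that never finishes loading usually means port 443 isn't reachable on the ELB (no HTTPS listener, or a security group blocking 443).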