AWS Ec2Client socket timeout - can't connect through Docker to AWS services

I am trying to programmatically create a key pair with the AWS JS SDK. I am working within a Docker container on an EC2 server. My AWS credentials are correct, as is the region. After creating the client, I send the key pair request with await client.send(command), where client is the Ec2Client created with those credentials and command is created by CreateKeyPairCommand. I then receive the following error:
Error [TimeoutError]: Socket timed out without establishing a connection within 1100 ms
    at Timeout._onTimeout (/usr/src/app/node_modules/@aws-sdk/node-http-handler/dist-cjs/set-connection-timeout.js:12:38)
    at listOnTimeout (node:internal/timers:564:17)
    at process.processTimers (node:internal/timers:507:7) {
  '$metadata': { attempts: 5, totalRetryDelay: 1029 }
}
My hunch is that it is an issue with the ports and the way the SDK communicates with AWS. Upon further research, I've come to see that AWS communicates over port 443 for HTTPS and port 80 for HTTP, both of which are exposed for my Docker application, both in the containers and in the associated security group. I can SSH into the instance and connect to the container, and I can otherwise run requests through the container from Postman. It is only when the AWS SDK tries to send a request that this networking error presents itself. If you have any ideas, let me know!
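For reference, a minimal sketch (TypeScript, AWS SDK for JavaScript v3) of the kind of client and command described above might look like this; the region, key pair name, and raised connection timeout are assumptions added for illustration, not values from the original question. Raising connectionTimeout mainly helps distinguish a slow connection from one that is blocked outright.

// Minimal sketch; region, key name and timeout are illustrative only
import { EC2Client, CreateKeyPairCommand } from "@aws-sdk/client-ec2";
import { NodeHttpHandler } from "@aws-sdk/node-http-handler";

const client = new EC2Client({
    region: "us-east-1", // assumption: set to the region your credentials target
    // Raise the connection timeout well above the ~1100 ms seen in the error,
    // mainly to tell a slow connection apart from one that is blocked outright
    requestHandler: new NodeHttpHandler({ connectionTimeout: 10000 }),
});

async function createKeyPair(): Promise<void> {
    const command = new CreateKeyPairCommand({ KeyName: "example-key-pair" }); // hypothetical name
    const response = await client.send(command);
    console.log(response.KeyName, response.KeyFingerprint);
}

createKeyPair().catch(console.error);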

Related

Encrypted links from Google Cloud Run svc to Cloud Run svc

Backstory (but possibly can be skipped): The other day I finished connecting to MySQL over full SSL from a Cloud Run service without really doing any SSL cert work, which was great! Just click 'only allow SSL' in GCP, click 'generate server certs', allow my Cloud Run service access to the database instance, swap out the TCP socket factory with Google's factory, set some props, and it worked!
PROBLEM:
NOW, I am trying to figure out secure Google Cloud Run service to Cloud Run service communication, and I am reading
https://cloud.google.com/run/docs/authenticating/service-to-service
which has us requesting a token over HTTP??? Why is this not over HTTPS? Is communication from my Docker container to the token service actually encrypted?
Can I communicate HTTP to HTTP between two Cloud Run services and will it be encrypted?
thanks,
Dean
From https://cloud.google.com/compute/docs/storing-retrieving-metadata#is_metadata_information_secure:
When you make a request to get information from the metadata server, your request and the subsequent metadata response never leave the physical host that is running the virtual machine instance.
The traffic from your container to the metadata server at http://metadata/ stays entirely within your project, so SSL is not required; there is no opportunity for it to be intercepted.
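For concreteness, the token fetch that the linked service-to-service doc describes looks roughly like the sketch below (TypeScript, assuming Node 18+ for the global fetch); the receiving service URL is a placeholder.

// Sketch of the token exchange described in the linked doc
const audience = "https://receiving-service-xxxxx-uc.a.run.app"; // hypothetical target Cloud Run URL
const metadataUrl =
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity" +
    `?audience=${encodeURIComponent(audience)}`;

async function callOtherService(): Promise<void> {
    // Plain HTTP is fine here: the request to the metadata server never leaves the host (see above)
    const tokenResponse = await fetch(metadataUrl, {
        headers: { "Metadata-Flavor": "Google" },
    });
    const idToken = await tokenResponse.text();

    // The actual service-to-service call goes over HTTPS to the *.run.app URL
    const response = await fetch(audience, {
        headers: { Authorization: `Bearer ${idToken}` },
    });
    console.log(response.status);
}

callOtherService().catch(console.error);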

AWS IoT MQTT WebSocket connection problem from Puppeteer in Docker

I have a webpage that connects to AWS IoT using MQTT through a websocket with an unregistered AWS Cognito identity.
When I go to this page with a web browser (I have tested Chrome, Firefox, Safari, and mobile versions), it all works and I am connected.
I want to test the page using Puppeteer through Docker so I can deploy lots of machines to stress test the page.
When I use Puppeteer from my local machine, it works. However, when I try to use Puppeteer from inside a Docker instance, it doesn't.
I am using alekzonder/puppeteer:latest and a simple script that just goes to the page and waits 10 seconds. The page itself loads, but the websocket connection fails:
failed: Error during WebSocket handshake: Unexpected response code: 403
Is there something I need to add to the Docker image to allow websockets? Or does this have something to do with the Cognito identity created from a Docker instance?
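For context, a script of the kind described would look roughly like the sketch below (TypeScript with the puppeteer package); the page URL and the extra Chromium flags are assumptions, and the logging is only there to make errors such as the 403 handshake visible in the container logs.

// Minimal sketch of the described test
import puppeteer from "puppeteer";

(async () => {
    const browser = await puppeteer.launch({
        // Flags commonly needed when Chromium runs inside a container (assumption)
        args: ["--no-sandbox", "--disable-setuid-sandbox"],
    });
    const page = await browser.newPage();

    // Log console messages and failed requests so connection errors show up in the container logs
    page.on("console", (msg) => console.log("console:", msg.text()));
    page.on("requestfailed", (req) => console.log("failed:", req.url(), req.failure()?.errorText));

    await page.goto("https://example.com/iot-page", { waitUntil: "networkidle2" }); // hypothetical URL
    await new Promise((resolve) => setTimeout(resolve, 10000)); // wait 10 seconds as in the original script
    await browser.close();
})();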

AWS Simple Email Service works on localhost but not on production Amazon EC2 server

So I am working on a mailing API for my website. The scenario is as follows:
Customer connects to endpoint.
API endpoint handles the request and sends an email using the mailing service (which is based on the AWS SDK).
API returns Ok/BadRequest based on result.
When I do this with my API running on localhost, everything works fine and I receive the email in my mailbox as expected. But when I run my API service on an Amazon EC2 instance, I get this:
Response status code does not indicate success: 404 (Not Found)
I double-checked that I have the .aws/credentials file both on my localhost machine and on the EC2 instance (Ubuntu 16.10). I can reach my API service running on the EC2 instance just fine, because I get the BadRequest response. The problem occurs when the mailing service tries to send an email using Amazon SES. I believe it's not the code itself, because it runs fine on localhost. Any ideas?
In case someone else has the same problem, changing to port 587 solved the problem.
EC2 has a very strict throttle on port 25 by default. You can get whitelisted on an IP-by-IP basis by filling out a formal request, but if you're sending to SES, our recommendation is to use port 587 or 2587.
For more on the EC2 throttle on port 25:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-port-25-throttle/
And for more about SES's available SMTP ports:
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-connect.html
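The original API is not Node-based, but as a rough illustration of the port choice above, an SES SMTP submission on port 587 looks like the sketch below (TypeScript with nodemailer); the endpoint region, credentials, and addresses are placeholders.

// Illustrative sketch only; endpoint region, credentials and addresses are placeholders
import nodemailer from "nodemailer";

async function sendTestMail(): Promise<void> {
    const transporter = nodemailer.createTransport({
        host: "email-smtp.us-west-2.amazonaws.com", // SES SMTP endpoint for your region
        port: 587,     // submission port recommended above instead of throttled port 25
        secure: false, // TLS is negotiated via STARTTLS after connecting
        auth: {
            user: process.env.SES_SMTP_USER,
            pass: process.env.SES_SMTP_PASS,
        },
    });

    await transporter.sendMail({
        from: "noreply@example.com",
        to: "customer@example.com",
        subject: "Test",
        text: "Sent via SES SMTP on port 587",
    });
}

sendTestMail().catch(console.error);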

AWS API Gateway to .NET Core Web API running in ECS

EDIT
I now realise that I need to install a certificate on the server and validate the client certificate separately. I'm looking at https://github.com/xavierjohn/ClientCertificateMiddleware
I believe the certificate has to be from one of the CAs listed in the AWS doco - http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-supported-certificate-authorities-for-http-endpoints.html
This certificate allows API Gateway to establish a HTTPS connection to the instance and it passes along the client certificate that can be validated.
ORIGINAL POST
I am trying to configure a new microservices environment and I'm having a few issues.
Here is what I'm trying to achieve:
Angular website connects to backend API via API Gateway (URL: gateway.company.com.au)
API Gateway is configured with 4 stages - DEV, UAT, PreProd and PROD
The resources in API Gateway are an HTTP proxy to the back-end services via a Network Load Balancer. Each service in each stage will get a different port allocated: e.g. 30,000, 30,001, etc. for DEV and 31,000, 31,001, etc. for UAT
The network load balancer has a DNS of services.company.com.au
AWS ECS hosts the docker containers for the back-end services. These services are .NET Core 2.0 Web API projects
The ECS task definition specifies the container image to use and has a port mapping configured - Host Port: 0 Container Port: 4430. A host port of 0 is dynamically allocated by ECS.
The network load balancer has a listener for each microservice port and forwards the request to a target group. There is a target group for each service for each environment.
The target group includes both EC2 instances in the ECS cluster and ports are dynamically assigned by ECS
This port is then mapped by ECS/Docker to the container port of 4430
In order to prevent clients from calling services.company.com.au directly, the API Gateway is configured with a Client Certificate.
In my Web API, I'm building the web host as follows:
.UseKestrel(options =>
{
    // Listen on all interfaces on the container port that ECS maps (4430)
    options.Listen(new IPEndPoint(IPAddress.Any, 4430), listenOptions =>
    {
        // PEM body copied from the API Gateway client certificate (elided here)
        const string certBody = "-----BEGIN CERTIFICATE----- Copied from API Gateway Client certificate -----END CERTIFICATE-----";
        var cert = new X509Certificate2(Encoding.UTF8.GetBytes(certBody));

        var httpsConnectionAdapterOptions = new HttpsConnectionAdapterOptions
        {
            // Accept a client certificate if one is presented
            ClientCertificateMode = ClientCertificateMode.AllowCertificate,
            SslProtocols = System.Security.Authentication.SslProtocols.Tls,
            // Note: this is the public client certificate only; as per the edit above,
            // Kestrel's ServerCertificate needs a certificate with a private key to terminate TLS
            ServerCertificate = cert
        };
        listenOptions.UseHttps(httpsConnectionAdapterOptions);
    });
})
My DockerFile is:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80 443
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "microservice.company.com.au.dll"]
When I use Postman to try and access the service, I get a 504 Gateway timeout. The CloudWatch log shows:
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Sending request to http://microservice.company.com.au:30000/service
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Execution failed due to an internal error
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Method completed with status: 504
I've been able to get the following architecture working:
API Gateway
Application Load Balancer - path-based routing to direct to the right container
ECS managing ports on the load balancer
The container listening on HTTP port 80
Unfortunately, this leaves the services open on the DNS of the Application Load Balancer, because API Gateway can only access public load balancers.
I'm not sure where it's failing but I suspect I've not configured .NET Core/Kestrel correctly to terminate the SSL using the Client Certificate.
In relation to this overall architecture, it would make things easier if:
The public Application Load Balancer could be used with a HTTPS listener using the Client Certificate of API Gateway to terminate the SSL connection
API Gateway could connect to internal load balancers without using Lambda as a proxy
Any tips or suggestions will be considered but at the moment, the main goal is to get the first architecture working.
If more information is required, let me know and I will update the question.
The problem was caused by the security group attached to the EC2 instances that formed the ECS cluster not allowing the correct port range. The security group for each EC2 instance in the cluster needs to allow the ECS dynamic port range.
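As a sketch of the kind of rule that answer describes, here is what it might look like in AWS CDK (TypeScript); the construct names, the VPC CIDR source, and the port range (commonly 32768-65535 on recent ECS-optimized AMIs) are assumptions to adapt to your cluster, and the same rule can of course be added in the console instead.

// Sketch (AWS CDK v2); names, CIDR source and port range are assumptions
import { Stack } from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";

declare const stack: Stack;  // existing stack (assumed)
declare const vpc: ec2.IVpc; // VPC hosting the ECS cluster (assumed)

const clusterSg = new ec2.SecurityGroup(stack, "EcsClusterSg", { vpc });

// Allow the ECS dynamic host port range from inside the VPC so the
// NLB target groups can reach the dynamically mapped host ports
clusterSg.addIngressRule(
    ec2.Peer.ipv4(vpc.vpcCidrBlock),
    ec2.Port.tcpRange(32768, 65535),
    "ECS dynamic port range from within the VPC"
);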

Why am I getting a connect timeout error from AWS SES when the limit has been increased?

I had my web app running on an EC2 instance (AWS server 1). I have another AWS server 2 where the DB is. I had a verified domain and a verified recipient test email address, and emails were going out well. One day I set up an Elastic IP so that AWS 1 could talk to AWS 2 for some other purpose. Not sure if that caused the issue. Now I have reverted AWS 1 to its normal IPv4 address (removed the Elastic IP), and everywhere in my app and for SSH I use the normal IPv4 address.
As per other posts, I also contacted AWS and increased the sending limit. I also set the outbound rules for SMTP and SMTPS. Neither seems to be working.
If I run the web app on my localhost with the same SES credentials, emails are sent out. Only when my web app is on Amazon EC2 are the emails not being sent out.
Following is the error that I am getting:
Unable to execute HTTP request: Connect to email.us-west-2.amazonaws.com:443 [email.us-west-2.amazonaws.com/52.94.209.0] failed: connect timed out
It's been 2 days and I am scratching my head to get it resolved. Please help.
PS: As per request, here are the outbound rules:
Type: MYSQL/Aurora
Protocol: TCP
Port Range: 3306
Destination: //MyIP
I don't have any other outbound rule.