How to clone private AWS codecommit repo to on-prem server? - amazon-web-services

We have an AWS account with no NAT or Internet Gateway and a private CodeCommit repository. Our VPC has a direct connection to the on-prem network, and we have opened port 443 from the on-prem firewall towards the AWS VPC. The on-prem server doesn't have internet connectivity. We have created interface endpoints in the VPC to access CodeCommit. The on-prem server has an AWS profile with an access key and secret key. Now we would like to clone the repo to the on-prem server and update and push it from there.
Update: since we have configured the AWS profile on the on-prem server, we are trying the Git HTTPS URL option to clone the repo. But it looks like it is trying to hit the public endpoint of the CodeCommit service, so it keeps waiting for a response and eventually fails, like below:
git clone https://git-codecommit.ap-south-1.amazonaws.com/v1/repos/xxx-xxx-projects
Cloning into 'xxx-xxx-projects'...
fatal: unable to access 'https://git-codecommit.ap-south-1.amazonaws.com/v1/repos/xxx-xxx-projects/': Failed to connect to git-codecommit.ap-south-1.amazonaws.com port 443: Connection timed out
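For completeness, the HTTPS auth side is wired through the AWS CLI credential helper for that profile, roughly like this (the profile name is illustrative):
git config --global credential.helper '!aws --profile onprem codecommit credential-helper $@'
git config --global credential.UseHttpPath true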

You probably haven't set up the routing correctly yet.
First, check whether you can access the VPC interface endpoint from within the AWS VPC. An easy way to do that is to launch an EC2 instance in that VPC and run the Reachability Analyzer with the instance as the source and the interface endpoint as the target.
If you can't reach the endpoint, verify that you have followed the procedure in the CodeCommit docs for setting up the endpoint correctly. To double-check, you can also run nslookup from the EC2 instance in the AWS VPC. This verifies that the name resolves to the IP address of the ENI of the VPC interface endpoint. As the target for the lookup, use the public DNS name of CodeCommit's Git endpoint (replace the region):
nslookup git-codecommit.<YOUR REGION HERE>.amazonaws.com
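For reference, a successful lookup from inside the VPC should return a private IP from the VPC's CIDR (the address of the endpoint ENI); the output would look roughly like this (addresses are illustrative):
nslookup git-codecommit.ap-south-1.amazonaws.com
Server:         10.0.0.2
Address:        10.0.0.2#53

Non-authoritative answer:
Name:   git-codecommit.ap-south-1.amazonaws.com
Address: 10.0.1.25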
If that works, next check how CodeCommit's public DNS name resolves from your on-prem network. For the AWS VPC, AWS created a hidden private hosted zone in Route 53 behind the scenes when the interface endpoint was set up, which takes care of resolving the public endpoint's DNS name to the private endpoint of the CodeCommit interface endpoint. This is why the nslookup earlier returns the private IP address of the endpoint rather than the public IP address.
If nslookup doesn't return the same IP address (that of the ENI of the interface endpoint) when you run the command on-prem, you'll have to set up your DNS resolver to forward requests for that domain to the AWS VPC-provided DNS (an example of how to do this using "Unbound" is here).
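As a rough sketch of the Unbound approach (the region and IPs are illustrative; the forward targets must be DNS servers inside the AWS VPC that can see the endpoint's private hosted zone, for example Route 53 Resolver inbound endpoints reachable over your private connection):
# /etc/unbound/unbound.conf.d/codecommit.conf -- illustrative only
forward-zone:
    name: "git-codecommit.ap-south-1.amazonaws.com."
    # DNS servers inside the AWS VPC that resolve the endpoint's private hosted zone
    forward-addr: 10.0.1.53
    forward-addr: 10.0.2.53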
Hope that helps!

Related

AWS: test ALB with Lambda and S3

As described in the image below, I have a Route 53 record that calls an ALB that triggers a Lambda, and this Lambda function clones some code from a Git repository to an S3 bucket.
The Route 53 record is in a private hosted zone, so I can't find a way to send a POST request to the A record to trigger the whole process.
Is there any way I can test it?
Private hosted zones (HZ) can't be accessed directly from the internet. They are only usable within a VPC. From the docs:
A private hosted zone is a container that holds information about how you want Amazon Route 53 to respond to DNS queries for a domain and its subdomains within one or more VPCs that you create with the Amazon VPC service.
This means that your POST or GET to www.bla-bla-amazon.com will not get resolved over the internet, so you can't call the DNS name from the internet. For that, as the docs say, you need a public HZ:
If you want to route traffic for your domain on the internet, you use a Route 53 public hosted zone.
However, if you want to access the private HZ from outside of a VPC, you can do it indirectly, using a VPN or an SSH tunnel. An SSH tunnel is easier to set up for testing and development purposes. For that to work, you need a public EC2 instance in the VPC where your HZ is. From outside of AWS, e.g. home/work, you can set up an SSH tunnel with local port forwarding to your EC2 instance, e.g.:
ssh -i private_ssh_key -L local_port:private_dns_of_your_service:remote_port ec2-user@ec2-instance-ip
This way, you will be able to use http://localhost:local_port on your home/work computer to access private resources in the VPC.
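For example, assuming the internal service behind the private HZ record listens on 443 (the key path, hostname, ports, and IP below are all illustrative):
# forward local port 8443 through the public EC2 instance to the private service
ssh -i private_ssh_key -L 8443:private_dns_of_your_service:443 ec2-user@203.0.113.10
# then hit the private service via the tunnel from your own machine
curl -k https://localhost:8443/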
By the way, if the setup should be private, the ALB should be internal, not internet-facing.

Cannot Pull Container Error in Amazon Elastic Container Service

I am trying to launch a task in Amazon ECS but getting the following error:
CannotPullContainerError: Error response from daemon, request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
I was able to pull the container in my local environment and it works fine, but I get this error while trying to deploy in the Amazon environment.
The suggested checks from Amazon are as follows:
Confirm that the subnet used to run a task has a route to an internet gateway or NAT gateway in a route table.
Note: Instead of an internet gateway or NAT gateway, you can use AWS PrivateLink. To avoid errors, be sure to correctly configure AWS PrivateLink or HTTP proxy.
If you're launching tasks in a public subnet, choose ENABLED for Auto-assign public IP when you launch a task in the Amazon EC2 console. This allows your task to have outbound network access to pull an image.
If you're using an Amazon provided DNS in your Amazon VPC, confirm that the security group attached to the instance has outbound access allowed for HTTPS (port 443).
If you're using a custom DNS, confirm that outbound access is allowed for DNS (UDP and TCP) on port 53 and HTTPS access on port 443.
Verify that your network ACL rules aren't blocking traffic to the registry.
This error ultimately points to a network connectivity issue between the subnet (or Fargate microVM) your container runs in and the ECS and ECR endpoints.
By default that traffic traverses the public internet (unless you have set up the correct VPC endpoints), so if you do not have outbound internet access you will not be able to connect to the ECR endpoint.
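If the tasks must stay in a private subnet with no internet path, a rough sketch of creating the endpoints with the CLI (the region, IDs, and exact service list are illustrative; check the ECS/ECR interface endpoint guides for what your launch type needs):
# interface endpoint for Docker image pulls from ECR (repeat for ecr.api, ecs, etc. as needed)
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ecr.dkr \
    --subnet-ids subnet-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
# gateway endpoint for S3, which stores the actual image layers
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0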

How do I SSH tunnel to a remote server whilst remaining on my machine?

I have a Kubernetes cluster to administer which is in its own private subnet on AWS. To allow us to administer it, we have a Bastion server in our public subnet. Tunnelling directly through to our cluster is easy. However, we need our deployment machine to establish a tunnel and execute commands against the Kubernetes server, such as running Helm and kubectl. Does anyone know how to do this?
Many thanks,
John
In AWS
Scenario 1
By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).
If that's the case, you can run kubectl commands from your Concourse server (which has internet access) using the kubeconfig file provided; if you don't have the kubeconfig file, follow these steps.
Scenario 2
When you have the private cluster endpoint enabled (which seems to be your case):
When you enable endpoint private access for your cluster, Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster's VPC. This private hosted zone is managed by Amazon EKS, and it doesn't appear in your account's Route 53 resources. In order for the private hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames and enableDnsSupport set to true, and the DHCP options set for your VPC must include AmazonProvidedDNS in its domain name servers list. For more information, see Updating DNS Support for Your VPC in the Amazon VPC User Guide.
Either modify your private endpoint (steps here) or follow these steps.
There are probably simpler ways to get it done, but the first solution that comes to my mind is setting up simple SSH port forwarding.
Assuming that you have SSH access to both machines, i.e. Concourse has SSH access to the Bastion and the Bastion has SSH access to the cluster, it can be done as follows:
First, set up so-called local SSH port forwarding on the Bastion (pretty well described here):
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<kubernetes-cluster-ip-address-or-hostname>
Now you can access your Kubernetes API from the Bastion with:
curl localhost:<kube-api-server-port>
However, that still isn't what you need. Now you need to forward it to your Concourse machine. On Concourse, run:
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<bastion-server-ip-address-or-hostname>
From now on, your Kubernetes API is available on localhost of your Concourse machine, so you can e.g. access it with curl:
curl localhost:<kube-api-server-port>
or incorporate it in your .kube/config.
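As a rough illustration of the kubeconfig route (the port and cluster name are illustrative; the API server's certificate won't list localhost, so quick tests may need to skip TLS verification or set the expected server name):
# point a cluster entry at the local end of the tunnel, then run commands through it
kubectl config set-cluster tunneled-cluster --server=https://localhost:<kube-api-server-port>
kubectl --cluster=tunneled-cluster --insecure-skip-tls-verify get nodes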
Let me know if it helps.
You can also make such tunnel more persistent. More on that you can find here.

AWS ECS: VPC Endpoints and NAT Gateways

According to the AWS documentation on NAT Gateways, they cannot send traffic over VPC endpoints, unless it is setup in the following manner:
A NAT gateway cannot send traffic over VPC endpoints [...]. If your instances in the private subnet must access resources over a VPC endpoint [...], use the private subnet’s route table to route the traffic directly to these devices.
Following this example in the docs, I created the following configuration for my ECS app:
VPC (vpc-app) with CIDR 172.31.0.0/16.
App subnet (subnet-app) with the following route table:
Destination | Target
----------------|-----------
172.31.0.0/16 | local
0.0.0.0/0 | nat-main
NAT Gateway (nat-main) in vpc-app in subnet default-1 with the following Route Table:
Destination | Target
----------------|--------------
172.31.0.0/16 | local
0.0.0.0/0 | igw-xxxxxxxx
Security Group (sg-app) with port 443 open for subnet-app.
VPC Endpoints (Interface type) with vpc-app, subnet-app and sg-app for the following services:
com.amazonaws.eu-west-1.ecr.api
com.amazonaws.eu-west-1.ecr.dkr
com.amazonaws.eu-west-1.ecs
com.amazonaws.eu-west-1.ecs-agent
com.amazonaws.eu-west-1.ecs-telemetry
com.amazonaws.eu-west-1.s3 (Gateway)
It's also important to mention that I've enabled DNS Resolution and DNS Hostnames for vpc-app, as well as the Enable Private DNS Name option for the ecr-dkr and ecr-api VPC endpoints.
I've also tried working only with Fargate containers since they don't have the added complication of the ECS Agent, and because according to the docs:
Tasks using the Fargate launch type only require the com.amazonaws.region.ecr.dkr Amazon ECR VPC endpoint and the Amazon S3 gateway endpoint to take advantage of this feature.
This also doesn't work and every time my Fargate tasks run I see a spike in Bytes out to source under nat-main's Monitoring.
No matter what I try, the EC2 instances (and Fargate tasks) in the subnet-app are still pulling images using nat-main and not going to the local address of the ECR service.
I've restarted the ECS Agent and made sure to check all the boxes in the ECS Interface VPC Endpoints guide AND the ECR Interface Endpoints guide.
What am I missing here?
Any help would be appreciated.
After many hours of trial and error, and with lots of help from @jogold, the missing piece was found in this blog post:
The next step is to create a gateway VPC endpoint for S3. This is necessary because ECR uses S3 to store Docker image layers. When your instances download Docker images from ECR, they must access ECR to get the image manifest and S3 to download the actual image layers.
After I created the S3 Gateway VPCE, I forgot to add its address to subnet-app's routing table, so although the initial request to my ECR URI was made using the internal address, the downloading of the image from S3 still used the NAT Gateway.
After adding the entry, the network usage of the NAT Gateway dropped dramatically.
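For reference, associating an existing S3 gateway endpoint with subnet-app's route table can also be done from the CLI, roughly like this (the IDs are illustrative); it adds the S3 prefix-list route to the table automatically:
aws ec2 modify-vpc-endpoint --vpc-endpoint-id vpce-0123456789abcdef0 --add-route-table-ids rtb-0123456789abcdef0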
More information on how to setup Gateway VPCE can be found here.
Interface VPC endpoints work with DNS resolution, not routing.
In order for your configuration to work, you need to ensure that you checked Enable Private DNS Name when you created the endpoint. This enables you to make requests to the service using its default DNS hostname instead of the endpoint-specific DNS hostnames.
From the documentation:
When you create an interface endpoint, we generate endpoint-specific DNS hostnames that you can use to communicate with the service. For AWS services and AWS Marketplace partner services, you can optionally enable private DNS for the endpoint. This option associates a private hosted zone with your VPC. The hosted zone contains a record set for the default DNS name for the service (for example, ec2.us-east-1.amazonaws.com) that resolves to the private IP addresses of the endpoint network interfaces in your VPC. This enables you to make requests to the service using its default DNS hostname instead of the endpoint-specific DNS hostnames. For example, if your existing applications make requests to an AWS service, they can continue to make requests through the interface endpoint without requiring any configuration changes.
The alternative is to update your application to use your endpoint-specific DNS hostnames.
Note that to use private DNS names, DNS resolution and DNS hostnames must be enabled for your VPC.
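If either attribute is off, they can be enabled from the CLI roughly like this (the VPC ID is a placeholder; each attribute needs its own call):
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"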
Also note that in order to use ECR/ECS without a NAT gateway, you need to configure an S3 endpoint (gateway type, which requires a route table update) to allow instances to download the image layers from the underlying private Amazon S3 buckets that host them. More information in Setting up AWS PrivateLink for Amazon ECS, and Amazon ECR.

Connect Cloudformation configurated beanstalk setup to an existing AWS site-to-site VPN?

I have this current Cloudformation config setup:
PasteBin example here
This runs a web app; there's also some networking config in there that routes outbound traffic through a NAT gateway with an Elastic IP.
--
Separately, we have a manually created site-to-site VPN set up in AWS.
(screenshot: Elastic IP created by CloudFormation)
The other side of the VPN specified that, for the connection to work, our private IP range has to be in 192.168.242.0/24.
They have also specifically whitelisted 192.168.242.230 at their end, which is the private IP associated with the Elastic IP that the CloudFormation above created.
How can I establish a connection from my Elastic Beanstalk EC2 instance to a server protected by this VPN? At the moment, the connection just times out.
You would need to add a route table rule to allow traffic to X.X.X.X/X to flow via the Virtual Private Gateway (vgw-xxxxxx):
Destination | Target
----------------|--------------
x.x.x.x/x | vgw-xxxxxx
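In CloudFormation this corresponds to an AWS::EC2::Route resource whose GatewayId points at the virtual private gateway; the equivalent CLI call would look roughly like this (the IDs and CIDR are placeholders for the remote network behind the VPN):
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block x.x.x.x/x --gateway-id vgw-xxxxxxxx
Alternatively, if the VPN connection has route propagation available, enabling propagation on that route table can replace the static route.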