How to set up a connection between an RDS database and an EC2 instance via CDK

I'm able to create an EC2 instance and an RDS db via the CDK, but I can't find out how to connect them outside of the AWS console.
When setting up the connection manually I get the screen that describes the changes, saying "To set up a connection between the database and the EC2 instance, VPC security group xxx-xxx-x is added to the database, and VPC security group xxx-xxx-x is added to the EC2 instance."
Is there a way to do this via CDK?
Is it possible to do this in the instanceProps of the DatabaseCluster?
My existing code for the RDS db cluster looks something like this:
this.dbCluster = new DatabaseCluster(this, 'MyDbCluster', {
  // ...
  instanceProps: { vpc: props.vpc, vpcSubnets: { subnetGroupName: 'private' } }
});
How would I add the group to my existing code for the EC2 instance - in the vpcSubnets section?
const ec2Instance = new Instance(this, 'ec2instance', {
  // ...
  vpcSubnets: {
    subnetGroupName: 'private',
  },
});

You need to allow the EC2 instance to connect to RDS.
You do this by using the DatabaseCluster.connections property (docs).
Look at the examples in the Connections class docs.

this.dbCluster.connections.allowFrom(ec2Instance, ec2.Port.tcp(5432));

Adjust the port to match your database engine's port.
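If you'd rather not hardcode the port, a minimal sketch (reusing the dbCluster and ec2Instance from the question) that relies on the cluster's default port instead:

// Instance implements IConnectable, so CDK adds the matching
// ingress/egress rules to both security groups for you - the same
// pair of changes the console screen describes.
this.dbCluster.connections.allowDefaultPortFrom(
  ec2Instance,
  'Allow the EC2 instance to reach the database'
);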

Related

Can I configure my EKS cluster's inbound rules via CDK?

I am wondering if it is possible to configure the “public access source allowlist” from CDK. I can see and manage this in the console under the networking tab, but can’t find anything in the CDK docs about setting the allowlist during deploy. I tried creating and assigning a security group (code sample below), but this didn't work. Also the security group was created as an "additional" security group, rather than the "cluster" security group.
declare const vpc: ec2.Vpc;
declare const adminRole: iam.Role;

const securityGroup = new ec2.SecurityGroup(this, 'my-security-group', {
  vpc,
  allowAllOutbound: true,
  description: 'Created in CDK',
  securityGroupName: 'cluster-security-group'
});

securityGroup.addIngressRule(
  ec2.Peer.ipv4('<vpn CIDR block>'),
  ec2.Port.tcp(8888),
  'allow frontend access from the VPN'
);

const cluster = new eks.Cluster(this, 'my-cluster', {
  vpc,
  clusterName: 'cluster-cdk',
  version: eks.KubernetesVersion.V1_21,
  mastersRole: adminRole,
  defaultCapacity: 0,
  securityGroup
});
Update: I attempted the following, and it updated the cluster security group, but I'm still able to access the frontend when I'm not on the VPN:
cluster.connections.allowFrom(
  ec2.Peer.ipv4('<vpn CIDR block>'),
  ec2.Port.tcp(8888)
);
Update 2: I tried this as well, and I can still access my application's frontend even when I'm not on the VPN. However, I can now only use kubectl when I'm on the VPN, which is a step forward: at least the cluster's security has improved in a useful way.
const cluster = new eks.Cluster(this, 'my-cluster', {
  vpc,
  clusterName: 'cluster-cdk',
  version: eks.KubernetesVersion.V1_21,
  mastersRole: adminRole,
  defaultCapacity: 0,
  endpointAccess: eks.EndpointAccess.PUBLIC_AND_PRIVATE.onlyFrom('<vpn CIDR block>')
});
In general, EKS has two relevant security groups:
The one used by nodes, which AWS calls the "cluster security group". It's set up automatically by EKS. You shouldn't need to touch it unless you want (a) more restrictive rules than the defaults or (b) to open your nodes up for maintenance tasks (e.g. SSH access); a sketch of the latter follows after this list. This is what you are accessing via cluster.connections.
The Ingress load balancer security group. This is for an Application Load Balancer created and managed by EKS. In CDK, it can be created like so:
const cluster = new eks.Cluster(this, 'HelloEKS', {
  version: eks.KubernetesVersion.V1_22,
  albController: {
    version: eks.AlbControllerVersion.V2_4_1,
  },
});
This will serve as a gateway for all internal services that need an Ingress. You can access it via the cluster.albController property and add rules to it like a regular Application Load Balancer. I have no idea how EKS deals with task communication when an Ingress ALB is not present.
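As for point (b) above, a minimal sketch of opening the node security group through cluster.connections (the <bastion CIDR block> placeholder is an assumption, not from the question):

// Allow SSH to the nodes from a maintenance CIDR. This mutates the
// EKS-managed cluster security group rather than creating an
// "additional" security group.
cluster.connections.allowFrom(
  ec2.Peer.ipv4('<bastion CIDR block>'),
  ec2.Port.tcp(22),
  'SSH for node maintenance'
);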
Relevant docs:
Amazon EKS security group considerations
Alb Controller on CDK docs
The ALB property for EKS Cluster objects

Allow AWS Aurora VPC Cluster to be publicly accessible using CDK

I have tried configuring the RDS cluster using cluster.connections.allowDefaultPortFromAnyIpv4(); but I am still not able to connect to my Postgres instance; it keeps timing out.
I've been trying to figure this out for 2 days but still no luck, and I'm not sure what to do.
Here is the full code for the CDK config:
import { CdkWorkshopStack } from "../stacks/cdk-workshop-stack";
import * as rds from "@aws-cdk/aws-rds";
import * as ec2 from "@aws-cdk/aws-ec2";
import { ServerlessCluster } from "@aws-cdk/aws-rds";
import { Duration } from "@aws-cdk/core";

export const createDbInstance = (
  scope: CdkWorkshopStack
): { cluster: ServerlessCluster; dbName: string } => {
  // Create the VPC needed for the Aurora Serverless DB cluster
  const vpc = new ec2.Vpc(scope, "AuroraVPC");
  const dbName = "yt_backup";

  // Create the Serverless Aurora DB cluster; set the engine to Postgres
  const cluster = new rds.ServerlessCluster(scope, "yt_backup_cluster", {
    engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
    parameterGroup: rds.ParameterGroup.fromParameterGroupName(
      scope,
      "ParameterGroup",
      "default.aurora-postgresql10"
    ),
    defaultDatabaseName: dbName,
    // @ts-ignore
    vpc: vpc,
    // @ts-ignore
    scaling: { autoPause: Duration.minutes(10) }, // Optional. If not set, the instance pauses after 5 minutes
  });

  cluster.connections.allowDefaultPortFromAnyIpv4();

  return { cluster, dbName };
};
This opens the security group to all connections:

cluster.connections.allowDefaultPortFromAnyIpv4();

This (see the link for exactly where you would specify this) would give the database server a public IP, allowing connections from outside the VPC:

publiclyAccessible: true,

However, you are creating a Serverless cluster, which does not support the publicly accessible feature at this time.
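For reference, a hedged sketch of where that flag lives on a non-serverless instance (the construct ID, engine, and subnet selection here are assumptions, not taken from the question):

// publiclyAccessible only works when the instance sits in public
// subnets of a VPC that has an internet gateway.
const db = new rds.DatabaseInstance(scope, "PublicPostgres", {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
  publiclyAccessible: true,
});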
As Mark B mentions, a Serverless Aurora DB is not publicly accessible.
Having a database publicly accessible is a bad idea from a security point of view in my opinion (and definitely not open to 0.0.0.0/0).
An application inside your VPC should connect to the database, and if you need to access the database yourself you can use a BastionHostLinux, an SSH tunnel, or Direct Connect.
You can switch to a "non-serverless" database if you really need to, as that can be made publicly accessible if it's in a public subnet and the VPC has an internet gateway.
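For example, a minimal sketch (reusing the vpc and cluster from the question's code) of fronting the database with a bastion instead of opening it up:

// A small host in the same VPC, reachable via SSM Session Manager by
// default, and the only thing allowed to hit the cluster's port.
const bastion = new ec2.BastionHostLinux(scope, "DbBastion", { vpc });
cluster.connections.allowDefaultPortFrom(bastion, "Bastion access to Aurora");

From there, an SSH tunnel through the bastion lets a local Postgres client reach the cluster endpoint.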

AWS CDK - Add New Security Group to an Existing VPC Endpoint

I have an existing VPC Endpoint. Now, using CDK, I need to add a new SecurityGroup to the existing endpoint. CDK has an option to import the endpoint using the following method:
const vpce = InterfaceVpcEndpoint.fromInterfaceVpcEndpointAttributes(this, 'TransferVpce', {
  port: 443,
  vpcEndpointId: "vpce-EndPointID",
});
But once imported, it does not give me an option to update it by adding a new security group. Any suggestions?
The imported InterfaceVpcEndpoint exposes Connections, which is where its security groups live. You can then create your own SecurityGroup and modify the rules through it.
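A minimal sketch of that idea (the security group ID and CIDR are placeholders, not from the question):

// Import the endpoint together with the security group(s) it should
// manage, then add rules through the connections object.
const vpce = ec2.InterfaceVpcEndpoint.fromInterfaceVpcEndpointAttributes(this, 'TransferVpce', {
  port: 443,
  vpcEndpointId: 'vpce-EndPointID',
  securityGroups: [ec2.SecurityGroup.fromSecurityGroupId(this, 'VpceSg', 'sg-12345678')],
});

// Adds an ingress rule to the imported security group.
vpce.connections.allowFrom(ec2.Peer.ipv4('10.0.0.0/16'), ec2.Port.tcp(443));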

Neptune throwing "Host did not respond in a timely fashion" when trying to connect from private EC2

I have created a Neptune instance and, per the documentation here...
I created the following YAML:
hosts: [xxx.xxx.us-east-2.neptune.amazonaws.com]
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
And when I try to connect, everything seems to work; I see...

==>All scripts will now be sent to Gremlin Server - [xx.xx.us-east-2.neptune.amazonaws.com/172.xx.x.xxx:8182] - type ':remote console' to return to local mode

What am I missing? Why is the query failing?
Have you enabled IAM authentication on your instance?
If yes, you will have to perform some additional steps to connect to the DB.
If no, double-check the following:
The EC2 instance is in the same VPC as the cluster.
The inbound settings of the security group attached to the cluster, and the outbound settings of your EC2 instance's security group.
If it still doesn't connect, I would suggest contacting AWS Support.
Are you trying to connect from your PC or an EC2 instance?
Neptune is not publicly accessible, so create an EC2 instance in the same VPC and try to connect from it.

Can a bastion be assigned a specific AWS Elastic IP with Terraform?

We need to whitelist some Elastic IPs from the corporate firewall as allowed destination IPs for SSH. Is there a way to configure a bastion instance with Terraform and assign it a specific Elastic IP? And, likewise, have it return that EIP to the provisioned pool when the bastion is destroyed? Obviously, we don't want EIPs to be deallocated from our AWS account.
The existing answer is outdated. Associating existing Elastic IPs is now possible thanks to this change: https://github.com/hashicorp/terraform/pull/5236
Docs: https://www.terraform.io/docs/providers/aws/r/eip_association.html
Excerpt:

aws_eip_association

Provides an AWS EIP Association as a top level resource, to associate and disassociate Elastic IPs from AWS Instances and Network Interfaces.

NOTE: aws_eip_association is useful in scenarios where EIPs are either pre-existing or distributed to customers or users and therefore cannot be changed.
Currently, Terraform only supports attaching Elastic IPs to EC2 instances at EIP creation time, when you can optionally attach the EIP to an instance or an Elastic Network Interface. NAT Gateways also let you associate an EIP when the NAT Gateway is created, but that's a slightly special case.
The instance resource itself only allows a boolean choice of whether the instance gets a normal public IP address or not. There's a GitHub issue around allowing instances to be associated with pre-existing EIPs, but at the time of writing no pull request to support it.
If it's simply a case of wanting to open up a port on your corporate firewall once and not having to touch it again for a bastion box that is torn down regularly, and you're open to letting Terraform create and manage the EIP for you, then you could do something like the following:
resource "aws_instance" "bastion" {
ami = "ami-abcdef12"
instance_type = "t2.micro"
tags {
Name = "bastion"
}
}
output "bastion_id" {
value = "${aws_instance.bastion.id}"
}
And in a separate folder altogether you could have your EIP definition, look up the outputted instance ID from the bastion host's remote state file, and use that when applying the EIP:
resource "terraform_remote_state" "remote_state" {
backend = "s3"
config {
bucket = "mybucketname"
key = "name_of_key_file"
}
}
resource "aws_eip" "bastion_eip" {
vpc = true
instance = "${terraform_remote_state.remote_state.output.bastion_id}"
lifecycle {
prevent_destroy = true
}
}
In the above example I've used @BMW's approach, so that any plan that attempts to destroy the EIP should error out, as a fail-safe.
This at least should allow you to use Terraform to build and destroy short lived instances but apply the same EIP to the instance each time so you don't have to change anything on your firewall.
A slightly simpler approach using just Terraform would be to put the EIP definition in the same .tf file/folder as the bastion instance, but then you would be unable to use Terraform to destroy anything in that folder (including the bastion instance itself) while the lifecycle configuration block is in place, as it simply causes an error during the plan. Removing the block just gets you back to destroying the EIP every time you destroy the instance.
I spent some time working through this problem and found the other answers helpful, but incomplete.
For those people trying to reallocate an AWS elastic IP using Terraform, we can do so using a combination of terraform_remote_state and the aws_eip_association. Let me explain.
We should use two separate root modules, themselves within a parent folder:
parent_folder
├── elasticip
│   └── main.tf
└── server
    └── main.tf
In elasticip/main.tf you can use the following code, which will create an elastic IP and store the state in a local backend so that you can access its output from the server module. The output variable name cannot be 'id', as this will clash with the remote state's own id attribute and will not work; just use a different name, such as eip_id.
terraform {
  backend "local" {
    path = "../terraform-eip.tfstate"
  }
}

resource "aws_eip" "main" {
  vpc = true

  lifecycle {
    prevent_destroy = true
  }
}

output "eip_id" {
  value = "${aws_eip.main.id}"
}
Then in server/main.tf the following code will create a server and associate the elastic IP with it.
data "terraform_remote_state" "eip" {
backend = "local"
config = {
path = "../terraform-eip.tfstate"
}
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.web.id}"
allocation_id = "${data.terraform_remote_state.eip.eip_id}"
#For >= 0.12
#allocation_id = "${data.terraform_remote_state.eip.outputs.eip_id}"
}
resource "aws_instance" "web" {
ami = "insert-your-AMI-ref"
}
With that all set up, you can go into the elasticip folder and run terraform init and terraform apply to get your elastic IP. Then go into the server folder and run the same two commands to get your server with its associated elastic IP. From within the server folder you can run terraform destroy and terraform apply, and the new server will get the same elastic IP.
we don't want EIPs to be deallocated from our AWS account.

Yes, you can block that: set prevent_destroy to true.
resource "aws_eip" "bastion_eip" {
count = "${var.num_bastion}"
lifecycle {
prevent_destroy = true
}
}
Regarding the EIP assignment, please refer to @ydaetskcoR's reply.
If you're using an autoscaling group you can do it in the user data. https://forums.aws.amazon.com/thread.jspa?threadID=52601
#!/bin/bash
# configure AWS
aws configure set aws_access_key_id {MY_ACCESS_KEY}
aws configure set aws_secret_access_key {MY_SECRET_KEY}
aws configure set region {MY_REGION}
# associate Elastic IP
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ALLOCATION_ID={MY_EIP_ALLOC_ID}
aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOCATION_ID --allow-reassociation