Write a shell script file on a bastion host created using CDK - amazon-web-services

In AWS, to gain access to our RDS instance we set up a dedicated EC2 bastion host that we access securely by invoking the SSM Agent from the EC2 dashboard.
Currently we write a shell script by hand after connecting to the bastion host, but the script seems to disappear after a certain time. So, is there any way to create this file using CDK when I create the bastion host?
I tried using CloudFormationInit (cfn-init), but to no avail.
this.bastionHost = new BastionHostLinux(this, "BastionHost", {
  vpc: inspireStack.vpc,
  subnetSelection: { subnetType: SubnetType.PRIVATE_WITH_NAT },
  instanceType: InstanceType.of(InstanceClass.T2, InstanceSize.MICRO),
  init: CloudFormationInit.fromConfigSets({
    configSets: {
      default: ["install"],
    },
    configs: {
      install: new InitConfig([
        InitCommand.shellCommand("cd ~"),
        InitFile.fromString("jomar.sh", "testing 123"),
        InitCommand.shellCommand("chmod +x jomar.sh"),
      ]),
    },
  }),
});

You can write files to an EC2 instance with cloud-init, either from an existing file or directly from your TypeScript (a JSON object, for instance):
const ec2Instance = new ec2.Instance(this, 'Instance', {
  vpc,
  instanceType: ec2.InstanceType.of(
    ec2.InstanceClass.T4G,
    ec2.InstanceSize.MICRO,
  ),
  machineImage: new ec2.AmazonLinuxImage({
    generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
    cpuType: ec2.AmazonLinuxCpuType.ARM_64,
  }),
  init: ec2.CloudFormationInit.fromConfigSets({
    configSets: {
      default: ['install', 'config'],
    },
    configs: {
      install: new ec2.InitConfig([
        ec2.InitFile.fromObject('/etc/config.json', {
          IP: ec2Eip.ref,
        }),
        ec2.InitFile.fromFileInline(
          '/etc/install.sh',
          './src/asteriskConfig/install.sh',
        ),
        ec2.InitCommand.shellCommand('chmod +x /etc/install.sh'),
        ec2.InitCommand.shellCommand('cd /tmp'),
        ec2.InitCommand.shellCommand('/etc/install.sh'),
      ]),
      config: new ec2.InitConfig([
        ec2.InitFile.fromFileInline(
          '/etc/asterisk/pjsip.conf',
          './src/asteriskConfig/pjsip.conf',
        ),
      ]),
    },
  }),
});
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.CloudFormationInit.html
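A likely reason the snippet in the question did not work: cfn-init writes each InitFile to the exact path given, and every InitCommand runs in its own shell, so the preceding cd ~ has no effect on where jomar.sh lands. A sketch of the corrected config, assuming the bastion's default ec2-user home directory:
// Inside CloudFormationInit.fromConfigSets({ configs: { ... } }):
install: new InitConfig([
  // Use an absolute target path; a separate InitCommand("cd ~") does not affect InitFile.
  InitFile.fromString('/home/ec2-user/jomar.sh', '#!/bin/bash\necho "testing 123"\n'),
  InitCommand.shellCommand('chmod +x /home/ec2-user/jomar.sh'),
]),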

I see three simple workarounds:
1. The SSM session preferences contain a 'profile' section, where you can add your script as a bash function.
2. You can create an SSM document that creates this file, so before starting the session you only need to run that document.
3. Save the script on S3 and just download it.
Regarding the disappearing file: that's strange... The BastionHostLinux construct is similar to Instance, so try using Instance instead and create your script with user-data, as sketched below.
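A minimal sketch of the user-data route, assuming a plain ec2.Instance standing in for the bastion and an illustrative script body:
const bastion = new ec2.Instance(this, 'Bastion', {
  vpc,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
  machineImage: new ec2.AmazonLinuxImage({
    generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
  }),
});

// User data runs once at first boot; the file then persists on the root volume.
bastion.userData.addCommands(
  "cat > /home/ec2-user/jomar.sh <<'EOF'",
  '#!/bin/bash',
  'echo "testing 123"',
  'EOF',
  'chmod +x /home/ec2-user/jomar.sh',
  'chown ec2-user:ec2-user /home/ec2-user/jomar.sh',
);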

Related

How to create a database read replica in AWS RDS from an instance created using DatabaseInstanceFromSnapshot

I want to create a DB read replica using AWS CDK. I am creating an RDS DatabaseInstance from a snapshot with the DatabaseInstanceFromSnapshot construct, and I want to use that instance to create a read replica with the DatabaseInstanceReadReplica construct. DatabaseInstanceReadReplica takes a sourceDatabaseInstance parameter, for which I am passing the instance returned by DatabaseInstanceFromSnapshot. But I get the following error:
Type 'DatabaseInstanceReadReplica' is missing the following properties from type 'DatabaseInstance': sourceCfnProps, singleUserRotationApplication, multiUserRotationApplication, addRotationSingleUser, addRotationMultiUsers (2739)
How do I fix this issue? Any help is appreciated. Below is the code.
mySqlRdsInstance: DatabaseInstance
mySqlRdsReplicaInstance: DatabaseInstance

this.mySqlRdsInstance = new DatabaseInstanceFromSnapshot(this, props.rdsParameters.instanceName, {
  instanceIdentifier: props.rdsParameters.instanceIdentifier,
  snapshotIdentifier: props.rdsParameters.snapshotIdentifier || '',
  engine: DatabaseInstanceEngine.MYSQL,
  vpc: props.vpc,
  vpcSubnets: {
    subnetType: ec2.SubnetType.PUBLIC,
  },
})

this.mySqlRdsReplicaInstance = new DatabaseInstanceReadReplica(this, "", {
  sourceDatabaseInstance: this.mySqlRdsInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc: props.vpc,
  vpcSubnets: {
    subnetType: ec2.SubnetType.PUBLIC,
  },
})
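The error is a TypeScript type mismatch rather than an RDS problem: the class field is declared as the concrete DatabaseInstance type, but DatabaseInstanceReadReplica (like DatabaseInstanceFromSnapshot) is a sibling construct, not a subclass. A minimal sketch of one fix, typing the fields against the shared IDatabaseInstance interface (the concrete construct types work too); note the replica also needs a non-empty construct id:
// IDatabaseInstance is implemented by DatabaseInstance,
// DatabaseInstanceFromSnapshot and DatabaseInstanceReadReplica alike.
mySqlRdsInstance: IDatabaseInstance;
mySqlRdsReplicaInstance: IDatabaseInstance;

// Use a non-empty, unique construct id for the replica:
this.mySqlRdsReplicaInstance = new DatabaseInstanceReadReplica(this, "MySqlReadReplica", {
  sourceDatabaseInstance: this.mySqlRdsInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc: props.vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
});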

Pulumi - EFS Id output to EC2 LaunchConfiguration UserData

Using Pulumi, I created an EFS filesystem.
I want to add the mount to a launch configuration userdata by adding:
mount -t efs -o tls fs-xxx:/ /mnt/efs.
How can I add the efs.id to the launch configuration userdata?
(I can't convert an output to a string)
You can't convert an Output to a string, but you can use the value once the Output has resolved. You do this with an apply.
You can also use the @pulumi/cloudinit package to make this easier.
The following example is in TypeScript, but the pattern applies to all Pulumi SDKs:
import * as aws from "@pulumi/aws";
import * as cloudinit from "@pulumi/cloudinit";

const efs_fs = new aws.efs.FileSystem("foo", {});

const userData = efs_fs.id.apply(id =>
    cloudinit.getConfig({
        gzip: false,
        base64Encode: false,
        parts: [{
            contentType: "text/cloud-config",
            content: JSON.stringify({
                packages: [],
                // fstab-style entry mirroring: mount -t efs -o tls fs-xxx:/ /mnt/efs
                mounts: [[`${id}:/`, "/mnt/efs", "efs", "tls,_netdev", "0", "0"]],
                bootcmd: [],
                runcmd: [],
            }),
        }],
    })
);
You can then pass userData.rendered to any resource you're creating.
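For instance, wired into a launch configuration as in the question (a sketch; the AMI id and resource names are placeholders):
const launchConfig = new aws.ec2.LaunchConfiguration("web", {
    imageId: "ami-0123456789abcdef0", // placeholder AMI
    instanceType: "t3.micro",
    userData: userData.rendered,      // Output<string> from the cloud-init config
});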

How to import existing VPC in aws cdk?

Hi, I am working with AWS CDK. I am trying to get an existing non-default VPC. I tried the options below.
vpc = ec2.Vpc.from_lookup(self, id = "VPC", vpc_id='vpcid', vpc_name='vpc-dev')
This results in below error
[Error at /LocationCdkStack-cdkstack] Request has expired.
[Warning at /LocationCdkStack-cdkstack/TaskDef/mw-service] Proper policies need to be attached before pulling from ECR repository, or use 'fromEcrRepository'.
Found errors
Another method I tried:
vpc = ec2.Vpc.from_vpc_attributes(self, 'VPC', vpc_id='vpc-839227e7', availability_zones=['ap-southeast-2a','ap-southeast-2b','ap-southeast-2c'])
This results in
[Error at /LocationCdkStack-cdkstack] Request has expired.
[Warning at /LocationCdkStack-cdkstack/TaskDef/mw-service] Proper policies need to be attached before pulling from ECR repository, or use 'fromEcrRepository'.
Found errors
Another method I tried:
vpc = ec2.Vpc.from_lookup(self, id="VPC", is_default=True)  # this gets the default VPC, and it works
Can someone help me get a non-default VPC in AWS CDK? Any help would be appreciated. Thanks.
Take a look at the aws_cdk.aws_ec2 documentation and at CDK Runtime Context.
If your VPC is created outside your CDK app, you can use Vpc.fromLookup(). The CDK CLI will search for the specified VPC in the stack's region and account, and import the subnet configuration.
Lookup can be done by VPC ID, but more flexibly by searching for a specific tag on the VPC.
Usage:
# Example automatically generated. See https://github.com/aws/jsii/issues/826
from aws_cdk.core import App, Stack, Environment
from aws_cdk import aws_ec2 as ec2

# Information from the environment is used to get context information,
# so it has to be defined for the stack.
stack = MyStack(
    app, "MyStack", env=Environment(account="account_id", region="region")
)

# Retrieve VPC information
vpc = ec2.Vpc.from_lookup(stack, "VPC",
    # This imports the default VPC, but you can also
    # specify a 'vpc_name' or 'tags'.
    is_default=True
)
Update with a relevant example:
vpc = ec2.Vpc.from_lookup(stack, "VPC",
    vpc_id=VPC_ID
)
Update with a TypeScript example:
import ec2 = require('@aws-cdk/aws-ec2');

const getExistingVpc = ec2.Vpc.fromLookup(this, 'ImportVPC', { isDefault: true });
For AWS CDK v2 or v1 (latest), you can use:
// You can either use vpcId OR vpcName to fetch the desired VPC
const getExistingVpc = ec2.Vpc.fromLookup(this, 'ImportVPC', {
  vpcId: "VPC_ID",
  vpcName: "VPC_NAME"
});
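Lookup by tag is also supported, which avoids hard-coding IDs; a sketch, with an illustrative tag key/value:
const vpcByTag = ec2.Vpc.fromLookup(this, 'ImportVPCByTag', {
  tags: { Environment: 'dev' }, // any tag key/value present on the VPC
});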
Here is a simple example:
// Get VPC info from the AWS account; FYI we are not rebuilding, we are referencing
const DefaultVpc = Vpc.fromVpcAttributes(this, 'vpcdev', {
  vpcId: 'vpc-d0e0000b0',
  availabilityZones: core.Fn.getAzs(),
  privateSubnetIds: ['subnet-00a0de00'],
  publicSubnetIds: ['subnet-00a0de00'],
});

const yourService = new lambda.Function(this, 'SomeName', {
  code: lambda.Code.fromAsset("lambda"),
  handler: 'handlers.your_handler',
  role: lambdaExecutionRole,
  securityGroup: lambdaSecurityGroup,
  vpc: DefaultVpc,
  runtime: lambda.Runtime.PYTHON_3_7,
  timeout: Duration.minutes(2),
});
We can do it easily using ec2.Vpc.fromLookup(). The following post describes how to use the method:
https://kuchbhilearning.blogspot.com/2022/10/httpskuchbhilearning.blogspot.comimport-existing-vpc-in-aws-cdk.html

Using AWS CDK and RDS (Aurora), where can I change the Certificate authority?

I am setting up a database cluster (Aurora MySQL 5.7) using the DatabaseCluster Construct from #aws-cdk/aws-rds.
My question: where in the setup can I change the certificate authority? I want to programmatically set up the database to use rds-ca-2019 instead of rds-ca-2015. Note, I want to change this using CDK, not by "clicking in the AWS GUI". (I am referring to the 'Certificate authority' setting shown for the instance in the RDS console.)
I have been browsing the docs for the RDS CDK module and tried to Google this, without success.
This guide describes the manual steps for doing this.
AWS CDK RDS module
DatabaseCluster Construct
Low-level Cluster (CfnCluster)
BTW, my current config looks a bit like this:
const cluster = new rds.DatabaseCluster(this, 'aurora-cluster', {
  clusterIdentifier: 'aurora-cluster',
  engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
  masterUser: {
    username: 'someuser',
    password: 'somepassword'
  },
  defaultDatabaseName: 'db',
  instances: 2,
  instanceIdentifierBase: 'aurora-',
  instanceProps: {
    instanceType: ...,
    vpcSubnets: {
      subnetType: ec2.SubnetType.PUBLIC,
    },
    vpc: myVpc
  },
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  parameterGroup: {
    parameterGroupName: 'default.aurora-mysql5.7'
  },
  port: 3306,
  storageEncrypted: true
});
Apparently CloudFormation doesn't support the certificate authority field, and therefore CDK can't either:
https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/211
I upvoted the issue; feel free to join me!
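If CloudFormation support has landed since (the linked issue tracks CACertificateIdentifier on AWS::RDS::DBInstance), a low-level escape hatch over the cluster's instances would be one way to set it from CDK. A sketch, assuming the property is available in your CloudFormation version:
// Walk the cluster's children, find the L1 CfnDBInstance resources,
// and set the CA identifier directly (assumes CloudFormation support).
cluster.node.children
  .filter((child): child is rds.CfnDBInstance => child instanceof rds.CfnDBInstance)
  .forEach(dbInstance => {
    dbInstance.caCertificateIdentifier = 'rds-ca-2019';
  });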

ssh to amazon ubuntu instance

When I create an Amazon Ubuntu instance from the AWS web console and try to log in to it over ssh from any remote computer, I can log in. But when I create the EC2 instance using an Ansible aws.yml file and try the same, I cannot connect and get the error Permission denied (publickey) from every remote host except the one on which I ran the Ansible script. Am I doing something wrong in my Ansible file?
Here is my Ansible yml file:
auth: {
  auth_url: "",
  # This should be your AWS Access Key ID
  username: "AKIAJY32VWHYOFOR4J7Q",
  # This should be your AWS Secret Access Key;
  # can be passed as part of the cmd line when running the playbook
  password: "{{ password | default(lookup('env', 'AWS_SECRET_KEY')) }}"
}

# These variables define AWS cloud provision attributes
cluster: {
  region_name: "us-east-1",  #TODO Dynamic fetch
  availability_zone: "",  #TODO Dynamic fetch based on region
  security_group: "Fabric",
  target_os: "ubuntu",
  image_name: "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*",
  image_id: "ami-d15a75c7",
  flavor_name: "t2.medium",  # "m2.medium" is big enough for Fabric
  ssh_user: "ubuntu",
  validate_certs: True,
  private_net_name: "demonet",
  public_key_file: "/home/ubuntu/.ssh/fd.pub",
  private_key_file: "/home/ubuntu/.ssh/fd",
  ssh_key_name: "fabric",
  # This variable indicates which IP should be used; the only valid values are
  # private_ip or public_ip
  node_ip: "public_ip",
  container_network: {
    Network: "172.16.0.0/16",
    SubnetLen: 24,
    SubnetMin: "172.16.0.0",
    SubnetMax: "172.16.255.0",
    Backend: {
      Type: "udp",
      Port: 8285
    }
  },
  service_ip_range: "172.15.0.0/24",
  dns_service_ip: "172.15.0.4",
  # This section defines preallocated IP addresses for each node; if there are
  # no preallocated IPs, leave it blank
  node_ips: [ ],
  # Fabric network node names are expected to follow a clear pattern; this
  # defines the prefix for the node names.
  name_prefix: "fabric",
  domain: "fabricnet",
  # stack_size determines how many virtual or physical machines we will have;
  # each machine will be named ${name_prefix}001 to ${name_prefix}${stack_size}
  stack_size: 3,
  etcdnodes: ["fabric001", "fabric002", "fabric003"],
  builders: ["fabric001"],
  flannel_repo: "https://github.com/coreos/flannel/releases/download/v0.7.1/flannel-v0.7.1-linux-amd64.tar.gz",
  etcd_repo: "https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz",
  k8s_repo: "https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/",
  go_ver: "1.8.3",
  # If a volume is to be used, specify a size in GB; make the volume size 0 if
  # you do not wish to use a volume from your cloud
  volume_size: 8,
  # Cloud block device name presented on virtual machines
  block_device_name: "/dev/vdb"
}
For login using ssh, I am doing these steps:
1. Download the private key file.
2. chmod 600 the private key.
3. ssh -vvv -i ~/.ssh/sshkeys.pem ubuntu@ec.compute-1.amazonaws.com
I am getting the error Permission denied (publickey).
You should be using the key pair that you created for connecting to the AWS instance.
Go to the EC2 dashboard, find the running instance you need to ssh to, and click Connect.
It will show something like:
ssh -i "XXX.pem" ubuntu@ec2-X-XXX-XX-XX.XX-XXX-2.compute.amazonaws.com
Save XXX.pem (the instance's key pair file, not anything from the security group) to your machine, and use it rather than an ssh-keygen key from your own system.
Note that the Ansible playbook above launches instances with the 'fabric' key pair, so from other machines you would need that private key (/home/ubuntu/.ssh/fd in the config).