Cannot sts:AssumeRole with a service account for CDK-generated EKS cluster - amazon-iam

I deployed an EKS 1.21 cluster using CDK and then, using https://cert-manager.io/docs/installation/ as a guide, attempted to install cert-manager, with the end goal of issuing Let's Encrypt certificates for TLS-enabled services.
I create the IAM policies in my stack's code:
...
var externalDnsPolicy = new PolicyDocument(
    new PolicyDocumentProps
    {
        Statements = new[]
        {
            new PolicyStatement(
                new PolicyStatementProps
                {
                    Actions = new[] { "route53:ChangeResourceRecordSets", },
                    Resources = new[] { "arn:aws:route53:::hostedzone/*", },
                    Effect = Effect.ALLOW,
                }
            ),
            new PolicyStatement(
                new PolicyStatementProps
                {
                    Actions = new[]
                    {
                        "route53:ListHostedZones",
                        "route53:ListResourceRecordSets",
                    },
                    Resources = new[] { "*", },
                    Effect = Effect.ALLOW,
                }
            ),
        }
    }
);

var AllowExternalDNSUpdatesRole = new Role(
    this,
    "AllowExternalDNSUpdatesRole",
    new RoleProps
    {
        Description = "Route53 External DNS Role",
        InlinePolicies = new Dictionary<string, PolicyDocument>
        {
            ["AllowExternalDNSUpdates"] = externalDnsPolicy
        },
        RoleName = "AllowExternalDNSUpdatesRole",
        AssumedBy = new ServicePrincipal("eks.amazonaws.com"),
    }
);

var certManagerPolicy = new PolicyDocument(new PolicyDocumentProps
{
    Statements = new[]
    {
        new PolicyStatement(new PolicyStatementProps
        {
            Effect = Effect.ALLOW,
            Actions = new[]
            {
                "route53:GetChange",
            },
            Resources = new[]
            {
                "arn:aws:route53:::change/*",
            }
        }),
        new PolicyStatement(new PolicyStatementProps
        {
            Effect = Effect.ALLOW,
            Actions = new[]
            {
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
            },
            Resources = new[]
            {
                "arn:aws:route53:::hostedzone/*",
            },
        }),
    },
});

var AllowCertManagerRole = new Role(
    this,
    "AllowCertManagerRole",
    new RoleProps
    {
        Description = "Route53 Cert Manager Role",
        InlinePolicies = new Dictionary<string, PolicyDocument>
        {
            ["AllowCertManager"] = certManagerPolicy
        },
        RoleName = "AllowCertManagerRole",
        AssumedBy = new ServicePrincipal("eks.amazonaws.com"),
    }
);
...
And my cert issuer manifest:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cert-issuer
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::XREMOVEDX:role/AllowCertManagerRole
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cert-issuer-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-issuer
subjects:
  - kind: ServiceAccount
    name: cert-issuer
    namespace: default
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: sometls-net-letsencrypt
spec:
  acme:
    email: domain@sometls.net
    preferredChain: ""
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: sometls-net-letsencrypt-account-key
    solvers:
      - dns01:
          route53:
            hostedZoneID: Z999999999999
            region: us-east-2
            role: arn:aws:iam::XREMOVEDX:role/AllowExternalDNSUpdatesRole
        selector:
          dnsZones:
            - sometls.net
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: sometls-cluster-lets-encrypt
spec:
  secretName: somtls-cluster-lets-encrypt
  issuerRef:
    name: sometls-net-letsencrypt
    kind: ClusterIssuer
    group: cert-manager.io
  subject:
    organizations:
      - sometls
  dnsNames:
    - "*.sometls.net"
But I'm getting spammed with these errors, and cert-manager doesn't work:
(combined from similar events): Error presenting challenge: error instantiating route53 challenge solver: unable to assume role: AccessDenied: User: arn:aws:sts::XREMOVEDX:assumed-role/EksStackEast-EksClusterNodegroupDefaultC-U7IJ1PNZ2123/i-007c425b7a5e39123 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XREMOVEDX:role/AllowCertManagerRole status code: 403, request id: 2bd885a2-97a0-4a21-b017-40e099cb4123
I'm very iffy on how the IAM roles allow the Kubernetes ServiceAccount to assume them. I must be missing some connection piece that makes the magic of EKS IAM Roles for Service Accounts (IRSA) happen.
Please help!
UPDATE: Using CfnJson I am able to create the role so it looks like this:
{
    "Role": {
        "Path": "/",
        "RoleName": "AllowCertManagerRole",
        "RoleId": "REDACTED",
        "Arn": "arn:aws:iam::REDACTED:role/AllowCertManagerRole",
        "CreateDate": "2022-03-24T21:42:32+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Federated": "arn:aws:iam::REDACTED:oidc-provider/oidc.eks.us-east-2.amazonaws.com/id/REDACTED"
                    },
                    "Action": "sts:AssumeRoleWithWebIdentity",
                    "Condition": {
                        "StringLike": {
                            "oidc.eks.us-east-2.amazonaws.com/id/REDACTED:sub": "system:serviceaccount:*:cert-issuer"
                        }
                    }
                }
            ]
        },
        "Description": "Route53 Cert Manager Role",
        "MaxSessionDuration": 3600,
        "Tags": [
            {
                "Key": "dynasty",
                "Value": "sometls-1.0"
            }
        ],
        "RoleLastUsed": {}
    }
}
I'm still getting the same errors. The condition in the new role uses the "StringLike" operator; I'm not sure whether that is correct, and I'm not sure how to avoid needing a non-derived lvalue when setting up the IDictionary<string, object> for the conditions. Also, the error message is unchanged in that it expects to be able to sts:AssumeRole, not sts:AssumeRoleWithWebIdentity. I tried changing the action in the role to sts:AssumeRole, with the same effect.
UPDATE #2:
The actual problem with cert-manager was a modification to the install manifests, required for AWS IRSA to work, that I had missed: https://cert-manager.io/docs/configuration/acme/dns01/route53/#service-annotation ... it turns out that is really important.
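For reference, the change that doc describes boils down to annotating cert-manager's own ServiceAccount (the one the controller pods run as) with the IRSA role ARN, not just the issuer's ServiceAccount. A minimal sketch of wiring that up from the stack, written in TypeScript here and assuming a Helm install of cert-manager (the jetstack chart's installCRDs and serviceAccount.annotations values), could look something like this:
// Sketch only: install cert-manager via Helm and annotate its ServiceAccount
// for IRSA. Chart name/repository and values keys are assumptions based on the
// standard jetstack chart; the role ARN is the AllowCertManagerRole from above.
cluster.addHelmChart('CertManager', {
  chart: 'cert-manager',
  repository: 'https://charts.jetstack.io',
  namespace: 'cert-manager',
  createNamespace: true,
  values: {
    installCRDs: true,
    serviceAccount: {
      annotations: {
        'eks.amazonaws.com/role-arn': 'arn:aws:iam::XREMOVEDX:role/AllowCertManagerRole',
      },
    },
  },
});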
For anyone who wants to see how to add an OIDC provider as an AssumedBy principal with conditions in C#, see the snippet below. I would have thought there would be a convenience method in AWS CDK that took care of these machinations automatically, but I couldn't find one...
...
var Cluster = new Cluster(this, "EksCluster", new ClusterProps
{ ... });
...
var CertIssuerCondition = new CfnJson(this, "CertIssuerCondition", new CfnJsonProps
{
    Value = new Dictionary<string, object>
    {
        { $"{Cluster.ClusterOpenIdConnectIssuer}:sub", "system:serviceaccount:*:cert-manager" },
    }
});

var certManagerPolicy = new PolicyDocument(new PolicyDocumentProps
{
    Statements = new[]
    {
        new PolicyStatement(new PolicyStatementProps
        {
            Effect = Effect.ALLOW,
            Actions = new[]
            {
                "route53:GetChange",
            },
            Resources = new[]
            {
                "arn:aws:route53:::change/*",
            }
        }),
        new PolicyStatement(new PolicyStatementProps
        {
            Effect = Effect.ALLOW,
            Actions = new[]
            {
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
            },
            Resources = new[]
            {
                "arn:aws:route53:::hostedzone/*",
            },
        }),
        new PolicyStatement(new PolicyStatementProps
        {
            Effect = Effect.ALLOW,
            Actions = new[]
            {
                "route53:ListHostedZonesByName",
            },
            Resources = new[]
            {
                "*",
            }
        }),
    },
});

var AllowCertManagerRole = new Role(
    this,
    "AllowCertManagerRole",
    new RoleProps
    {
        Description = "Route53 Cert Manager Role",
        InlinePolicies = new Dictionary<string, PolicyDocument>
        {
            ["AllowCertManager"] = certManagerPolicy
        },
        RoleName = "AllowCertManagerRole",
        AssumedBy = new FederatedPrincipal(
            Cluster.OpenIdConnectProvider.OpenIdConnectProviderArn,
            new Dictionary<string, object>
            {
                ["StringLike"] = CertIssuerCondition,
            },
            "sts:AssumeRoleWithWebIdentity")
    }
);

The trust relationship of your IAM role looks wrong to me.
You need to use a federated principal pointing to the OIDC provider of your EKS cluster, ideally with a condition that correctly reflects your service account and namespace names.
The principal has to look something like this:
const namespaceName = 'cert-manager';
const serviceAccountName = 'cert-issuer';

// If you're deploying EKS with CloudFormation/CDK you could, for example, export
// the OIDC provider URL and get it with Fn.importValue(...) in your stack.
const oidcProviderUrl = 'oidc.eks.YOUR-REGION.amazonaws.com/id/REDACTED';

// You can use wildcards for the namespace name and/or service account name
// if you want a less restrictive condition.
const conditionValue = `system:serviceaccount:${namespaceName}:${serviceAccountName}`;
const roleCondition = new CfnJson(this.stack, `CertIssuerRoleCondition`, {
  value: { [`${oidcProviderUrl}:sub`]: conditionValue }
});

// Likewise, the OIDC provider ARN could be exported and imported with Fn.importValue(...).
const oidcProviderArn = 'arn:aws:iam::REDACTED:oidc-provider/oidc.eks.YOUR-REGION.amazonaws.com/id/REDACTED';

// The condition map needs an operator key (StringEquals/StringLike) wrapping the CfnJson.
const principal = new FederatedPrincipal(oidcProviderArn, { StringEquals: roleCondition }, 'sts:AssumeRoleWithWebIdentity');

// Now use that principal for your IAM role.
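As for the convenience method the question hoped for: when the Cluster construct is available in the same app, the aws-eks module can create the whole IRSA wiring (federated principal, the :sub condition, and an annotated Kubernetes ServiceAccount) for you. A rough sketch, assuming CDK v2 import paths and reusing the Route 53 statements from the question:
import { Cluster } from 'aws-cdk-lib/aws-eks';
import { Effect, PolicyStatement } from 'aws-cdk-lib/aws-iam';

declare const cluster: Cluster; // the eks.Cluster defined elsewhere in the stack

// addServiceAccount creates an IAM role with the correct
// sts:AssumeRoleWithWebIdentity trust policy and a Kubernetes ServiceAccount
// annotated with eks.amazonaws.com/role-arn.
const certIssuerSa = cluster.addServiceAccount('CertIssuerServiceAccount', {
  name: 'cert-issuer',
  namespace: 'cert-manager', // the namespace must already exist
});

// Attach the Route 53 permissions from the question to the generated role.
certIssuerSa.role.addToPrincipalPolicy(new PolicyStatement({
  effect: Effect.ALLOW,
  actions: ['route53:GetChange'],
  resources: ['arn:aws:route53:::change/*'],
}));
certIssuerSa.role.addToPrincipalPolicy(new PolicyStatement({
  effect: Effect.ALLOW,
  actions: ['route53:ChangeResourceRecordSets', 'route53:ListResourceRecordSets'],
  resources: ['arn:aws:route53:::hostedzone/*'],
}));
The same construct is available from the C# bindings as Cluster.AddServiceAccount.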

Related

Elastic Beanstalk manual deploy application version S3 error

When I try to deploy another application version, I get an S3 error message.
For test purposes, the attached aws-service-role, the instance profile, and the user doing the manual deploy all have full S3 access.
The service and instance roles look like the following:
createServiceRole(): Role {
  const assumedBy = new ServicePrincipal("elasticbeanstalk.amazonaws.com")
  return new Role(this, `${this.settings.prefix}-service-role-${this.env}-id`, {
    roleName: `${this.settings.serviceName}-service-role-${this.env}`,
    assumedBy: assumedBy,
    managedPolicies: [
      { managedPolicyArn: "arn:aws:iam::aws:policy/AdministratorAccess-AWSElasticBeanstalk" },
      { managedPolicyArn: "arn:aws:iam::aws:policy/AmazonS3FullAccess" }
    ]
  })
}

createInstanceProfile(): CfnInstanceProfile {
  const assumedBy = new ServicePrincipal("ec2.amazonaws.com")
  const role = new Role(this, `${this.settings.serviceName}-access-role-${this.env}-id`, {
    roleName: `${this.settings.serviceName}-instance-profile-role-${this.env}`,
    assumedBy: assumedBy,
    managedPolicies: [
      { managedPolicyArn: "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier" },
      { managedPolicyArn: "arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier" },
      { managedPolicyArn: "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly" },
      { managedPolicyArn: "arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker" },
      { managedPolicyArn: "arn:aws:iam::aws:policy/AmazonS3FullAccess" }
    ]
  })
  return new CfnInstanceProfile(this, `${this.settings.serviceName}-instance-profile-${this.env}-id`, {
    instanceProfileName: `${this.settings.serviceName}-instance-profile-${this.env}`,
    roles: [role.roleName]
  })
}
Any idea?

Lambda batch item failures not working with Terraform

I'm trying to return only the failed messages to the SQS queue for re-processing, using the "Report batch item failures" feature.
The Lambda Terraform code:
module "lambda" {
source = "xxxx"
version = "0.0.0"
lambda_function_name = "${local.basename}-lambda"
lambda_function_handler = "lambda.handler"
lambda_payload_path = var.lambda_payload_path
role = aws_iam_role.xxxxxxxx.arn
timeout = var.lambda_timeout
memory_size = var.lambda_memory
}
resource "aws_lambda_event_source_mapping" "trigger" {
event_source_arn = aws_sqs_queue.queue.arn
function_name = module.lambda.lambda_arn
function_response_types = ["ReportBatchItemFailures"]
}
The SQS Terraform code:
resource "aws_sqs_queue" "queue" {
name = "${lower(local.basename)}"
kms_master_key_id = var.kms_sqs
visibility_timeout_seconds = 300
}
resource "aws_sqs_queue_policy" "allow-s3-notification-policy" {
queue_url = aws_sqs_queue.queue.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "${aws_sqs_queue.queue.arn}",
"Condition": {
"ArnEquals": { "aws:SourceArn": "${aws_s3_bucket.bucket.arn}" }
}
}
]
}
POLICY
}
This is my Lambda handler TypeScript code that returns the response to SQS:
// `records` and `failedMessageIds` are computed earlier in the handler (omitted here);
// SQSBatchItemFailure comes from the aws-lambda (@types/aws-lambda) typings.
const batchItemFailures: SQSBatchItemFailure[] = [];
if (failedMessageIds.length === 0) {
  return { batchItemFailures };
} else if (records.length === failedMessageIds.length) {
  console.error(`Failed to handle messages: ${failedMessageIds}: ${JSON.stringify(records)}`);
  // A null/empty itemIdentifier makes Lambda treat the whole batch as failed.
  batchItemFailures.push({ itemIdentifier: null });
  return { batchItemFailures };
}
const failedMessages = records.filter(record => failedMessageIds.includes(record.messageId));
failedMessageIds.forEach(id => batchItemFailures.push({ itemIdentifier: id }));
console.error(`Failed to handle messages: ${failedMessageIds}: ${JSON.stringify(failedMessages)}`);
return { batchItemFailures };
On the other hand, when I throw a regular error, the whole batch is re-processed.
Any ideas?
You need to configure your event source mapping (as stated in the link you've provided) to include the value ReportBatchItemFailures in FunctionResponseTypes.
This value should be passed as an item in an array of strings (doc), and it looks like this in a YAML SAM/CloudFormation template (see the last two lines):
MyLambda:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: path/
    Handler: index.handler
    Events:
      MySQSEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt MyQueue.Arn
          BatchSize: 20
          FunctionResponseTypes:
            - ReportBatchItemFailures
I won't write the Terraform version, since I have no quick way to verify its correctness, but I think you get the idea: your Lambda is not yet fully configured for reporting batch item failures.
There is a quick way to check whether this is the root of your problem: temporarily replace the Lambda trigger and toggle the option on in the AWS Console:

Problem connecting CloudRun, and LoadBalancer and IAP with a custom domain on GCP

I am trying to put a Cloud Run service behind a load balancer in order to use IAP authentication. I created a custom domain on domains.google.com, but for some reason I cannot access my Cloud Run service on that domain.
The Terraform ran successfully.
I followed this tutorial, skipping the Cloud Run creation in Terraform, as I had already configured it with a Cloud Build.
Here is the Terraform:
version.tf
terraform {
  required_version = ">= 0.13"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.11.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 4.11.0"
    }
  }
}
main.tf
################################################################
#                                                              #
#                Create VPC access connector                   #
#                                                              #
################################################################
resource "google_vpc_access_connector" "connector" {
  name          = "vpc-connector"
  region        = var.region
  project       = var.project
  ip_cidr_range = "10.8.0.0/28"
  network       = "default"
}

################################################################
#                                                              #
#               Create network endpoint group                  #
#                                                              #
################################################################
resource "google_compute_region_network_endpoint_group" "serverless_neg" {
  name                  = "serverless-neg"
  network_endpoint_type = "SERVERLESS"
  region                = var.region
  cloud_run {
    service = local.run_service_name
  }
}

module "lb-http" {
  source  = "GoogleCloudPlatform/lb-http/google//modules/serverless_negs"
  version = "~> 6.2.0"

  project = var.project
  name    = var.lb_name

  ssl                             = true
  managed_ssl_certificate_domains = ["ooshotnotiontoolbox.com"]
  https_redirect                  = true

  backends = {
    default = {
      description = null
      groups = [
        {
          group = google_compute_region_network_endpoint_group.serverless_neg.id
        }
      ]
      enable_cdn              = false
      security_policy         = null
      custom_request_headers  = null
      custom_response_headers = null

      iap_config = {
        enable               = true
        oauth2_client_id     = "iap-client-id"
        oauth2_client_secret = var.iap_client_secret
      }
      log_config = {
        enable      = false
        sample_rate = null
      }
    }
  }
}

data "google_iam_policy" "iap" {
  binding {
    role = "roles/iap.httpsResourceAccessor"
    members = [
      "domain:my-domain.com",
    ]
  }
}

resource "google_iap_web_backend_service_iam_policy" "policy" {
  project             = var.project
  web_backend_service = "${var.lb_name}-backend-default"
  policy_data         = data.google_iam_policy.iap.policy_data
  depends_on = [
    module.lb-http
  ]
}

output "load-balancer-ip" {
  value = module.lb-http.external_ip
}

output "oauth2-redirect-uri" {
  value = "https://iap.googleapis.com/v1/oauth/clientIds/${var.iap_client_id}:handleRedirect"
}
And the cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/${PROJECT_ID}/${_SERVICE_NAME}:$SHORT_SHA', '.' ]
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/${PROJECT_ID}/${_SERVICE_NAME}:$SHORT_SHA' ]
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'deploy'
      - '${_SERVICE_NAME}'
      - '--region=${_REGION}'
      - '--platform=managed'
      - '--allow-unauthenticated'
      - '--ingress=internal-and-cloud-load-balancing'
      - '--vpc-connector=vpc-connector'
      - '--memory=1024Mi'
      - '--service-account=${_SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com'
      - '--image=gcr.io/${PROJECT_ID}/${_SERVICE_NAME}:$SHORT_SHA'
      - '--set-env-vars=PROJECT_ID=${PROJECT_ID}'
options:
  logging: CLOUD_LOGGING_ONLY
The DNS records:
GET https://dns.googleapis.com/dns/v1beta2/projects/notion-integration-dv/managedZones/ooshotnotiontoolbox-com
{
  "cloudLoggingConfig": {},
  "creationTime": "2022-03-21T21:43:02.173Z",
  "description": "DNS zone for domain: ooshotnotiontoolbox.com",
  "dnsName": "ooshotnotiontoolbox.com.",
  "dnssecConfig": {
    "state": "ON",
    "defaultKeySpecs": [
      {
        "keyType": "KEY_SIGNING",
        "algorithm": "RSASHA256",
        "keyLength": 2048
      },
      {
        "keyType": "ZONE_SIGNING",
        "algorithm": "RSASHA256",
        "keyLength": 1024
      }
    ],
    "nonExistence": "NSEC3"
  },
  "fingerprint": "2d297a1a7d2348c50000017fae6ef71d",
  "id": "3254266459939096773",
  "name": "ooshotnotiontoolbox-com",
  "nameServers": [
    "ns-cloud-e1.googledomains.com.",
    "ns-cloud-e2.googledomains.com.",
    "ns-cloud-e3.googledomains.com.",
    "ns-cloud-e4.googledomains.com."
  ],
  "visibility": "PUBLIC"
}
An A record pointing the domain at the load balancer's IP address:
{
  "name": "*.ooshotnotiontoolbox.com.",
  "rrdata": [
    "34.118.230.189"
  ],
  "ttl": 300,
  "type": "A"
}
Load Balancer Certificate won't provision
{
  "creationTimestamp": "2022-03-21T14:50:38.853-07:00",
  "description": "",
  "id": "774651587549547969",
  "kind": "compute#sslCertificate",
  "managed": {
    "status": "PROVISIONING",
    "domains": [
      "ooshotnotiontoolbox.com"
    ],
    "domainStatus": {
      "ooshotnotiontoolbox.com": "FAILED_NOT_VISIBLE"
    }
  },
  "name": "ooshotnotiontoolbox-com-cert",
  "selfLink": "projects/notion-integration-dv/global/sslCertificates/ooshotnotiontoolbox-com-cert",
  "type": "MANAGED"
}
The errors:
When I go to https://project-name.a.run.app:
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>403 Forbidden</title>
</head>
<body text="#000000" bgcolor="#ffffff">
<h1>Error: Forbidden</h1>
<h2>Access is forbidden.</h2>
<h2></h2>
</body></html>
When I go to the domain name I bought, https://ooshotnotiontoolbox.com:
Safari Can’t Find the Server
In the Load Balancer configuration, under Certificate details

How to avoid parameterized cloud formation from CDK synth

I am working on a CDK app in which I have to create a VPC and an EKS cluster, but I am not using CDK directly to run the CloudFormation. I want to create a CloudFormation template separately and run it using the AWS CLI. However, whenever I create the CloudFormation template, EKS has asset parameters, which cause an error when I run the template. How can I avoid those parameters?
These are my files.
bin/eks.ts
#!/usr/bin/env node
import cdk = require('@aws-cdk/core');
import { VPCStack, EKSStack } from '../lib/eks-stack';
import { Construct, TagManager, Tag } from '@aws-cdk/core';
import { Scope } from 'babel__traverse';

const app = new cdk.App();
const environment_variables = { region: "us-east-1", account: "348394859384543" };

const appVPCStack = new VPCStack(app, "appDemoVPCStack", { env: environment_variables });
Tag.add(appVPCStack, "owner", "tamizh");
Tag.add(appVPCStack, "purpose", "testing");

const appEKSStack = new EKSStack(app, "appDemoEKSStack", { env: environment_variables, vpcStack: appVPCStack });
Tag.add(appEKSStack, "owner", "tamizh");
Tag.add(appEKSStack, "purpose", "testing");

app.synth();
lib/eks.ts
import cdk = require('@aws-cdk/core');
import ec2 = require('@aws-cdk/aws-ec2');
import { DefaultInstanceTenancy, GatewayVpcEndpointAwsService, GatewayVpcEndpoint } from '@aws-cdk/aws-ec2';
import { ManagedPolicy } from '@aws-cdk/aws-iam';
import eks = require('@aws-cdk/aws-eks');
import iam = require('@aws-cdk/aws-iam');
import asg = require("@aws-cdk/aws-autoscaling");
import { TagManager, TagType } from '@aws-cdk/core';

export class VPCStack extends cdk.Stack {
  public readonly vpc: ec2.Vpc;
  private endpoint: GatewayVpcEndpoint;

  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const eksClusterName = this.node.tryGetContext("eks.clusterName");
    this.vpc = new ec2.Vpc(this, eksClusterName + 'VPC', {
      // VPC configurations
    });
  }
}

export interface EKSProps extends cdk.StackProps {
  vpcStack: VPCStack;
}

export class EKSStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: EKSProps) {
    super(scope, id, props);
    const vpc = props.vpcStack.vpc;

    // Context variables for dynamic configuration
    const current_env = this.node.tryGetContext("env.type");
    const eksClusterName = this.node.tryGetContext("eks.clusterName");
    const nodeGroupKeyName = this.node.tryGetContext("eks.nodeGroupKeyName");
    const nodeGroupMaxCapacity = this.node.tryGetContext("eks.nodeGroupMaxCapacity");
    const nodeGroupMinCapacity = this.node.tryGetContext("eks.nodeGroupMinCapacity");
    const nodeGroupDesiredCapacity = this.node.tryGetContext("eks.nodeGroupDesiredCapacity");
    const nodeGroupInstanceType = this.node.tryGetContext("eks.nodeGroupInstanceType");

    // Role to access the cluster using kubeconfig:
    // aws eks update-kubeconfig --name eks --region <region> --role-arn <role-arn>
    const clusterAdmin = new iam.Role(this, eksClusterName + 'AdminRole', {
      assumedBy: new iam.AccountRootPrincipal()
    });

    // Cluster properties
    const clusterProps = {
      clusterName: current_env + eksClusterName,
      // Default capacity 0 avoids allocating the maximum number of worker nodes
      // while creating the control plane
      defaultCapacity: 0,
      vpc: vpc,
      mastersRole: clusterAdmin
    };

    // Create a new EKS cluster control plane
    const cluster = new eks.Cluster(this, eksClusterName, clusterProps);

    const eksOptimizedImage = {
      // standard or GPU-optimized
      nodeType: eks.NodeType.STANDARD
    };
    const nodeGroupMachineImage = new eks.EksOptimizedImage(eksOptimizedImage);

    // Auto Scaling group for worker nodes, which can be scaled up or down at any time
    const rcAsg = new asg.AutoScalingGroup(this, current_env + 'ASG', {
      vpc: vpc,
      instanceType: nodeGroupInstanceType,
      machineImage: nodeGroupMachineImage,
      // Create a key pair to SSH into the worker nodes and give its name here
      // (it must be in the same account and region)
      // keyName: nodeGroupKeyName,
      minCapacity: nodeGroupMinCapacity,
      maxCapacity: nodeGroupMaxCapacity,
      desiredCapacity: nodeGroupDesiredCapacity,
      updateType: asg.UpdateType.ROLLING_UPDATE,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE }
    });

    cluster.addAutoScalingGroup(rcAsg, {
      mapRole: true
    });
  }
}
Once I ran this, I got the output template below.
{
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    // all resources
  },
  "Parameters": {
    "AssetParametersea4957b1606259534983fnjdfs934r4b6ad19a204S3Bucket371D99F8": {
      "Type": "String",
      "Description": "S3 bucket for asset \"ea4957b1606m93439fmrefew99cc02944b6ad19a204\""
    },
    // more parameters.
  }
}
How can I avoid these asset parameters?
The answer is that you can't avoid the asset parameters in CDK apps. Not every app has assets, but whenever there is functionality that cannot be expressed in plain CloudFormation, CDK introduces these asset parameters, which are resolved at cdk deploy time.
Reference: https://github.com/aws/aws-cdk/issues/5403

How can I access an EC2 instance created by CDK?

I am provisioning infrastructure using this CDK class:
// imports
export interface AppStackProps extends cdk.StackProps {
  repository: ecr.Repository
}

export class AppStack extends cdk.Stack {
  public readonly service: ecs.BaseService

  constructor(app: cdk.App, id: string, props: AppStackProps) {
    super(app, id, props)

    const vpc = new ec2.Vpc(this, 'main', { maxAzs: 2 })
    const cluster = new ecs.Cluster(this, 'x-workers', { vpc })
    cluster.addCapacity('x-workers-asg', {
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO)
    })

    const logging = new ecs.AwsLogDriver({ streamPrefix: "x-logs", logRetention: logs.RetentionDays.ONE_DAY })
    const taskDef = new ecs.Ec2TaskDefinition(this, "x-task")
    taskDef.addContainer("x-container", {
      image: ecs.ContainerImage.fromEcrRepository(props.repository),
      memoryLimitMiB: 512,
      logging,
    })

    this.service = new ecs.Ec2Service(this, "x-service", {
      cluster,
      taskDefinition: taskDef,
    })
  }
}
When I check the AWS panel, I see the instance created without a key pair.
How can I access the instance in this case?
You can go to your AWS account > EC2 > Key Pairs, create a new key pair, and then pass the key name to the capacity you add to the cluster:
const cluster = new ecs.Cluster(this, 'x-workers', { vpc })
cluster.addCapacity('x-workers-asg', {
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
  keyName: "x" // this can be a parameter if you prefer
})
I got this idea by reading this article on CloudFormation.