AWS CDK - Use FARGATE_SPOT with ApplicationLoadBalancedFargateService

I'm using ApplicationLoadBalancedFargateService in my AWS-CDK project (using Java).
As I am mostly experimenting and don't need stability, I would like to configure the service to spin up only FARGATE_SPOT instances, but I haven't found a way to do it.
Is there any way to do it?

I think I found the solution:
import static java.util.Collections.singletonList;

import software.amazon.awscdk.services.ecs.CapacityProviderStrategy;
import software.amazon.awscdk.services.ecs.Cluster;
import software.amazon.awscdk.services.ecs.patterns.ApplicationLoadBalancedFargateService;
import software.amazon.awscdk.services.ecs.patterns.ApplicationLoadBalancedFargateServiceProps;

Cluster cluster = new Cluster(...);
cluster.enableFargateCapacityProviders();

ApplicationLoadBalancedFargateService fargateService = new ApplicationLoadBalancedFargateService(
    this,
    "FargateService",
    ApplicationLoadBalancedFargateServiceProps.builder()
        .cluster(cluster)
        .capacityProviderStrategies(singletonList(
            CapacityProviderStrategy.builder()
                .capacityProvider("FARGATE_SPOT")
                .weight(1)
                .build()))
        // ...
        .build());
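If you later want a baseline of on-demand capacity with Spot on top, the same property accepts multiple strategies. A minimal sketch, shown in Python CDK for brevity (the Java builder takes an equivalent list); the base and weight values here are illustrative assumptions, not recommendations:

from aws_cdk import aws_ecs as ecs

capacity_provider_strategies = [
    # Run the first two tasks on regular Fargate...
    ecs.CapacityProviderStrategy(capacity_provider="FARGATE", base=2, weight=1),
    # ...and favour Fargate Spot for any tasks beyond that.
    ecs.CapacityProviderStrategy(capacity_provider="FARGATE_SPOT", weight=4),
]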

Related

AWS CDK and two EKS clusters sharing the same IAM Roles

This is probably not unique to these exact components, but this is where I have encountered this problem:
I am standing up multiple EKS clusters with CDK and they all need to be able to assume certain IAM roles with RBAC to do AWS-y things. For instance:
var AllowExternalDNSUpdatesRole = new Role(
    this,
    "AllowExternalDNSUpdatesRole",
    new RoleProps
    {
        Description = "Route53 External DNS Role",
        InlinePolicies = new Dictionary<string, PolicyDocument>
        {
            ["AllowExternalDNSUpdates"] = externalDnsPolicy
        },
        RoleName = "AllowExternalDNSUpdatesRole",
        AssumedBy = new FederatedPrincipal(
            Cluster.OpenIdConnectProvider.OpenIdConnectProviderArn,
            new Dictionary<string, object>
            {
                ["StringLike"] = ExternalDnsCondition,
            },
            "sts:AssumeRoleWithWebIdentity"),
    }
);
I'm giving it a RoleName so I can reference it in a sane way in the Kubernetes YAML files. At the time I'm creating the role, I need to be able to create a FederatedPrincipal referring to the cluster's OIDC provider so I can drop it in AssumedBy. I can't create a role with the same name when I stand up the second or nth cluster. It bombs spectacularly.
Ideally I would create these kinds of roles in their own IAM-only stack, and then attach the FederatedPrincipal to the created roles at EKS Cluster creation time. I have tried to figure out how to do that. When these clusters get built up and torn down they would just add and remove themselves from the AssumedBy part of the role. I would love a clue to help me figure that out.
Beyond that, the only other thing I can think of is to create roles per cluster and then modify the YAML to refer to the uniquely named generated roles. That is less than ideal. I'm trying to avoid having to maintain per-cluster YAML files for Kubernetes.
I'm game for other strategies too....
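A minimal sketch of the shared-IAM-stack idea above, assuming both stacks are synthesized in the same CDK app (shown in Python for illustration; the C# API is analogous). The role is created once with a placeholder principal, and each cluster stack appends its own OIDC provider to the role's trust policy:

from aws_cdk import aws_iam as iam

# In the IAM-only stack: create the role once.
# external_dns_policy stands in for the policy from the question.
role = iam.Role(
    self, "AllowExternalDNSUpdatesRole",
    role_name="AllowExternalDNSUpdatesRole",
    description="Route53 External DNS Role",
    assumed_by=iam.AccountRootPrincipal(),  # placeholder principal
    inline_policies={"AllowExternalDNSUpdates": external_dns_policy},
)

# In each cluster stack: append that cluster's OIDC provider.
role.assume_role_policy.add_statements(
    iam.PolicyStatement(
        actions=["sts:AssumeRoleWithWebIdentity"],
        principals=[iam.FederatedPrincipal(
            cluster.open_id_connect_provider.open_id_connect_provider_arn,
            conditions={},  # put the StringLike condition from the question here
        )],
    )
)

Because the statements are added at synth time, a cluster stack that is torn down simply stops contributing its statement the next time the app is deployed.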

How to register RDS instance with CloudMap

I know this is possible through the AWS CLI and the Console, as I have done it that way before, but I now need to do it in Terraform. I would like to execute the equivalent of the aws servicediscovery register-instance CLI command.
Any documentation or examples that can be shared would be most beneficial and appreciated.
This is now possible using the aws_service_discovery_instance resource as of version v3.57.0 of the AWS provider.
resource "aws_service_discovery_instance" "example" {
instance_id = "mydb"
service_id = aws_service_discovery_service.example.id
attributes = {
AWS_INSTANCE_CNAME = aws_db_instance.example.address
}
}
Note that before v3.57.0, adding instances to the discovery service was not supported:
Add an aws_service_discovery_instance resource
But a pull request had already been prepared for it:
resource/aws_service_discovery_instance: new implementation
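For comparison, the boto3 equivalent of the aws servicediscovery register-instance call mentioned in the question would look roughly like this (the service ID and CNAME value are placeholders):

import boto3

client = boto3.client("servicediscovery")

# Equivalent of: aws servicediscovery register-instance
client.register_instance(
    ServiceId="srv-xxxxxxxxxx",  # placeholder CloudMap service ID
    InstanceId="mydb",
    Attributes={
        # placeholder RDS endpoint
        "AWS_INSTANCE_CNAME": "mydb.example.us-east-1.rds.amazonaws.com",
    },
)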

CDK rds database cluster and assigning existing security group

I'm using AWS CDK rds.DatabaseCluster to create a new RDS PostgreSQL cluster. It's working fine.
However, I can't find a way to assign an existing security group to this cluster. Has anyone done it before?
You can find an existing security group by using the following:
const securityGroup = SecurityGroup.fromSecurityGroupId(this, 'SG', 'sg-12345', {
  mutable: false
});
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ec2.SecurityGroup.html
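The imported security group can then be passed to the cluster when it is created. A minimal sketch, shown in Python CDK for illustration (the engine version and vpc are assumptions; depending on your CDK version the security groups are set on the instance props):

from aws_cdk import aws_ec2 as ec2, aws_rds as rds

security_group = ec2.SecurityGroup.from_security_group_id(
    self, "SG", "sg-12345", mutable=False
)

cluster = rds.DatabaseCluster(
    self, "Database",
    engine=rds.DatabaseClusterEngine.aurora_postgres(
        version=rds.AuroraPostgresEngineVersion.VER_13_6  # assumed version
    ),
    instance_props=rds.InstanceProps(
        vpc=vpc,  # assumed to exist elsewhere in the stack
        security_groups=[security_group],
    ),
)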

How to get brokers endpoint of Amazon MSK as an output

We have an AWS CloudFormation template through which we are creating an Amazon MSK (Kafka) cluster, which is working fine.
Now we have multiple applications in our product stack which consume the broker endpoints created by Amazon MSK. To automate the product deployment, we decided to create a Route 53 record set for the MSK broker endpoints. We are having a hard time finding out how we can get the broker endpoints of the MSK cluster as Outputs in an AWS CloudFormation template.
Looking forward to suggestions/guidance on this.
Following on from joinEffort's answer, this is how I did it using custom resources, as the CFN resource for an MSK::Cluster does not expose the broker URL.
(Option 2 uses boto3 and calls the AWS API directly.)
The description of the classes and methods to use from CDK custom resource code can be found here:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Kafka.html#getBootstrapBrokers-property
Option 1: Using a custom resource:
def get_bootstrap_servers(self):
    create_params = {
        "ClusterArn": self._platform_msk_cluster_arn
    }

    get_bootstrap_brokers = custom_resources.AwsSdkCall(
        service='Kafka',
        action='getBootstrapBrokers',
        region='ap-southeast-2',
        physical_resource_id=custom_resources.PhysicalResourceId.of(f'connector-{self._environment_name}'),
        parameters=create_params
    )

    create_update_custom_plugin = custom_resources.AwsCustomResource(
        self,
        'getBootstrapBrokers',
        on_create=get_bootstrap_brokers,
        on_update=get_bootstrap_brokers,
        policy=custom_resources.AwsCustomResourcePolicy.from_sdk_calls(
            resources=custom_resources.AwsCustomResourcePolicy.ANY_RESOURCE
        )
    )

    return create_update_custom_plugin.get_response_field('BootstrapBrokerString')
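Since the original question asks for the value as a stack output, the returned string can be wired straight into a CfnOutput (a sketch, assuming get_bootstrap_servers lives on the same stack class; import shown for CDK v2):

from aws_cdk import CfnOutput

CfnOutput(
    self, "BootstrapBrokers",
    value=self.get_bootstrap_servers(),
    description="MSK bootstrap broker string",
)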
Option 2: Using boto3:
import json
import boto3

client = boto3.client('kafka', region_name='ap-southeast-2')
response = client.get_bootstrap_brokers(
    ClusterArn='xxx'
)

# From here you can get the broker URLs:
json_response = json.loads(json.dumps(response))

Ref: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kafka.html#Kafka.Client.get_bootstrap_brokers
You should be able to get it from the command below. More can be found here.
aws kafka get-bootstrap-brokers --region us-east-1 --cluster-arn ClusterArn

How can I generate an execution plan from Terraform configuration without connecting to AWS?

I'm writing a unit test for a Terraform module, and I would like to confirm that the module produces the execution plan that I expect. However, connecting to Amazon within a test would take too long and require too much configuration of the continuous integration server.
How can I use terraform plan to generate an execution plan from my configuration that assumes that no resources exist?
I've been considering something similar for a testing framework around Terraform modules and have previously used Moto for mocking Boto calls in Python.
Moto works by monkey-patching calls to AWS, so it only works natively with Python. However, it does provide the mocked backend as a server running on Flask, to be used in standalone mode.
That said, I've just tried it with Terraform, and while plans seem to work okay, applying a very basic configuration led to this error:
* aws_instance.web: Error launching source instance: SerializationError: failed decoding EC2 Query response
caused by: parsing time "2015-01-01T00:00:00+0000" as "2006-01-02T15:04:05Z": cannot parse "+0000" as "Z"
I then happened to notice that plans complete fine even when the Moto server isn't running and I'm just using a non-existent local endpoint in the AWS provider.
As such, if you just need plans, you should be able to do this by adding an endpoints block that points to localhost, like this:
provider "aws" {
skip_credentials_validation = true
max_retries = 1
skip_metadata_api_check = true
access_key = "a"
secret_key = "a"
region = "us-west-2"
endpoints {
ec2 = "http://127.0.0.1:5000/"
}
}
resource "aws_instance" "web" {
ami = "ami-123456"
instance_type = "t2.micro"
tags {
Name = "HelloWorld"
}
}
How you inject that endpoints block for testing but not for real-world usage is probably another question, and would need more information on how your tests are being constructed.
Does terraform plan -refresh=false do what you want?
I use it to do a "fast plan", without taking the time to refresh the status of all the AWS resources.
Not sure if it actually connects to AWS to do that though.
If you're using a more complicated remote-state setup and that's the part you don't want to configure, you could also try adding an empty tfstate file and pointing to that with the -state option.
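Combining the two suggestions, a plan that assumes no resources exist should then be possible with something like:
terraform plan -refresh=false -state=empty.tfstate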