How best to retrieve AWS SSM parameters from the AWS CDK?

Apologies if this is a duplicate; I'm going a bit snow-blind with blogs and articles trying to find a solution.
I'm trying to use the AWS CDK to deploy a Stack - specifically a CloudFront Distribution layered over an S3 Bucket. I want to retrieve a cert from Certificate Manager, and I also want to update a Hosted Zone in Route 53.
I want to put the zone ID and cert ARN in SSM Parameter Store, and have my CDK app pull the correct ID/ARN from there, so as not to leave it in my code.
I'm currently pulling the values like this in my Go code:
certArn := awsssm.StringParameter_ValueFromLookup(stack, certArnSSM)
certificate := awscertificatemanager.Certificate_FromCertificateArn(stack, wrapName("certificate"), certArn)
Where certArnSSM is the path to the parameter.
However, when I run the synth I get this:
panic: "ARNs must start with \"arn:\" and have at least 6 components: dummy-value-for-/dev/placeholder/certificateArn"
From some reading, this is expected. However, I'm not sure about the 'best practice' approach to solving it. I'm not totally clear on how to use Lazy to solve this - do I need to create a type and implement the Produce() method?

I was unable to replicate your error. The following synths and deploys without error, correctly retrieving the certArn parameter from SSM as a valid certificate ARN lookup input:
func NewCertLookupStack(scope constructs.Construct, id string, props *awscdk.StackProps) awscdk.Stack {
	stack := awscdk.NewStack(scope, &id, props)
	certArn := awsssm.StringParameter_ValueFromLookup(stack, jsii.String("/dummy/certarn"))
	certificate := awscertificatemanager.Certificate_FromCertificateArn(stack, jsii.String("Certificate"), certArn)
	awscdk.NewCfnOutput(stack, jsii.String("ArnOutput"), &awscdk.CfnOutputProps{
		Value: certificate.CertificateArn(), // demonstrate it works: the correct cert arn stored as a stack output
	})
	return stack
}

I worked around the issue by making the UUID of the cert a variable in my code, and then constructing an ARN manually. It feels like the wrong way to solve the problem though.
createdArn := jsii.String(fmt.Sprintf("arn:aws:acm:us-east-1:%s:certificate/%s", *sprops.Env.Account, certUuid))
certificate := awscertificatemanager.Certificate_FromCertificateArn(stack, wrapName("certificate"), createdArn)

Related

Concatenate AWS Secrets in aws-cdk for ECS container

How do you go about making a Postgres URI connection string from a Credentials.fromGeneratedSecret() call without writing the secrets out using toString()?
I think I read somewhere about making a Lambda that does that, but man, that seems kind of overkill.
const dbCreds = Credentials.fromGeneratedSecret("postgres")
const username = dbCreds.username
const password = dbCreds.password
const uri = `postgresql://${username}:${password}@somerdurl/mydb?schema=public`
Pretty sure I can't do the above. However, my Hasura and API ECS containers need connection strings like the above, so I figure this is probably a solved thing?
If you want to import a secret that already exists in Secrets Manager, you can just do a lookup of the secret by name or ARN. Take a look at the documentation on how to get a value from AWS Secrets Manager.
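For instance, a minimal lookup sketch inside your stack (assuming aws-cdk-lib v2 in TypeScript; the secret name and ARN below are placeholders, not values from your setup):
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

// Look the existing secret up by name...
const dbSecret = secretsmanager.Secret.fromSecretNameV2(this, 'DbSecret', 'my-db-secret');

// ...or, if you have it, by its full ARN.
const dbSecretByArn = secretsmanager.Secret.fromSecretCompleteArn(
  this,
  'DbSecretByArn',
  'arn:aws:secretsmanager:eu-west-1:111111111111:secret:my-db-secret-AbCdEf',
);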
Once you have your secret in the code, it is easy to pass it on as an environment variable to your application. With CDK it is even possible to pass secrets from Secrets Manager or AWS Systems Manager Parameter Store directly onto the CDK construct. One such example would be (as shown in the documentation):
taskDefinition.addContainer('container', {
  image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
  memoryLimitMiB: 1024,
  environment: { // clear text, not for sensitive data
    STAGE: 'prod',
  },
  environmentFiles: [ // list of environment files hosted either on local disk or S3
    ecs.EnvironmentFile.fromAsset('./demo-env-file.env'),
    ecs.EnvironmentFile.fromBucket(s3Bucket, 'assets/demo-env-file.env'),
  ],
  secrets: { // retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store at container start-up
    SECRET: ecs.Secret.fromSecretsManager(secret),
    DB_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret, 'password'), // reference a specific JSON field (requires platform version 1.4.0 or later for Fargate tasks)
    PARAMETER: ecs.Secret.fromSsmParameter(parameter),
  },
});
Overall, in this case, you would not have to do any parsing or printing of the actual secret within the CDK. You can handle all of that processing within your application using properly set environment variables.
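For example, the application inside the container can assemble the connection string itself. A minimal Node/TypeScript sketch, assuming DB_USERNAME, DB_PASSWORD and DB_HOST are the names you chose for the environment entries and secrets in the task definition (these names are placeholders):
// Values injected by ECS at container start-up.
const { DB_USERNAME, DB_PASSWORD, DB_HOST } = process.env;

// Build the Postgres URI inside the application, so the secret never appears
// in the CDK code or in the synthesized template.
const connectionString =
  `postgresql://${DB_USERNAME}:${encodeURIComponent(DB_PASSWORD ?? '')}@${DB_HOST}:5432/mydb?schema=public`;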
However, from your question alone it is not clear what exactly you are trying to do. Still, the provided resources should point you in the right direction.

Migrate a CDK managed CloudFront distribution from the CloudFrontWebDistribution to the Distribution API

I have an existing CDK setup in which a CloudFront distribution is configured using the deprecated CloudFrontWebDistribution API. Now I need to configure an OriginRequestPolicy, so after some Googling I switched to the Distribution API (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cloudfront-readme.html) and reused the same "id" -
Distribution distribution = Distribution.Builder.create(this, "CFDistribution")
When I synth the stack I can already see in the YAML that the ID - e.g. CloudFrontCFDistribution12345689 - is different from the one before.
When I try to deploy, it fails, since the HTTP origin CNAMEs are already associated with the existing distribution: "Invalid request provided: One or more of the CNAMEs you provided are already associated with a different resource. (Service: CloudFront, Status Code: 409, Request ID: 123457657, Extended Request ID: null)"
Is there a way to either add the OriginRequestPolicy (I just want to transfer an additional header) to the CloudFrontWebDistribution or a way to use the new Distribution API while maintaining the existing distribution instead of creating a new one?
(The same operation takes around 3 clicks in the AWS Console).
You could use the following trick to assign the logical ID yourself instead of relying on the autogenerated logical ID. The other option is to execute it in two steps, first update it without the additional CNAME and then do a second update with the additional CNAME.
const cfDistro = new Distribution(this, 'distro', {...});
(cfDistro.node.defaultChild as CfnDistribution).overrideLogicalId('CloudfrontDistribution');
This will result in the following stack:
CloudfrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    ...
Small edit to explain why this happens:
Since you're switching to a new construct, you're also getting a new logical ID. In order to ensure a rollback is possible, CloudFormation will first create all new resources and create replacements for any resources that need to be recreated. Only when all of the creating and updating is done will it clean up by removing the old resources. This is also why a two-step approach works when changing the logical IDs of resources, and why forcing a normal update by keeping the same logical ID avoids the problem.
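For completeness, a rough sketch of the two-step alternative (aws-cdk-lib v2, TypeScript; the bucket, certificate and domain name are placeholders rather than the code from the question):
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

// Step 1: deploy the new Distribution *without* the CNAME, so it can be created
// while the old CloudFrontWebDistribution still owns the alias. `bucket` and
// `certificate` are existing constructs in your stack.
const distro = new cloudfront.Distribution(this, 'CFDistribution', {
  defaultBehavior: { origin: new origins.S3Origin(bucket) },
  // Step 2 (a separate deploy, once the old distribution has been removed):
  // domainNames: ['www.example.com'],
  // certificate,
});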
Thanks a lot @stijndepestel - simply assigning the existing logical ID worked on the first try.
Here's the Java variant of the code in the answer:
import software.amazon.awscdk.services.cloudfront.CfnDistribution;
...
((CfnDistribution) distribution.getNode().getDefaultChild()).overrideLogicalId("CloudfrontDistribution");

AWS DynamoDB resource not found exception

I have a problem connecting to DynamoDB. I get this exception:
com.amazonaws.services.dynamodb.model.ResourceNotFoundException:
Requested resource not found (Service: AmazonDynamoDB; Status Code:
400; Error Code: ResourceNotFoundException; Request ID: ..
But I do have the table, and the region is correct.
From the docs, it's either that you don't have a table with that name or that it is in CREATING status.
I would double-check that the table does in fact exist, in the correct region, and that you're using an access key that can reach it.
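One quick way to check is to describe the table with the SDK; a sketch with the AWS SDK for JavaScript v3 in TypeScript (table name and region are placeholders):
import { DynamoDBClient, DescribeTableCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ region: 'eu-west-1' });

async function checkTable(): Promise<void> {
  // A ResourceNotFoundException here means the table name, region or
  // credentials don't line up with what you think they are.
  const { Table } = await client.send(new DescribeTableCommand({ TableName: 'my-table' }));
  console.log(Table?.TableStatus); // should be ACTIVE, not CREATING
}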
My problem was stupid, but maybe someone has the same one... I had recently changed the default AWS credentials (~/.aws/credentials); I was testing in another account and forgot to roll the values back to the regular account.
I spent 1 day researching the problem in my project and now I should repay a debt to humanity and reduce the entropy of the universe a little.
Usually, this message says that your client can't reach a table in your DB.
You should check the following things (a client-configuration sketch follows the list):
1. Your database is running.
2. Your accessKey and secretKey are valid for the database.
3. Your DB endpoint is valid and contains the correct protocol ("http://" or "https://"), the correct hostname, and the correct port.
4. Your table was created in the database.
5. Your table was created in the database in the same region that you set in your credentials/configuration. (This one is optional, because some database environments, e.g. Testcontainers Dynalite, don't validate the region, so any non-empty region value will be accepted.)
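Here is the client-configuration sketch referred to above: an explicitly configured client for a local DynamoDB stand-in (AWS SDK for JavaScript v3, TypeScript; the endpoint, region and keys are placeholders):
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

const localClient = new DynamoDBClient({
  endpoint: 'http://localhost:8000', // item 3: protocol, hostname and port
  region: 'us-east-1',               // item 5: any non-empty region for local stand-ins
  credentials: {                     // item 2: whatever keys your local database accepts
    accessKeyId: 'dummy',
    secretAccessKey: 'dummy',
  },
});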
In my case the problem was that I couldn't save and load data from a table in tests where DynamoDB was substituted by Testcontainers and Dynalite. I found out that in our project tables are created by a Spring component marked with the @Component annotation. And in tests we use a global setting to lazy-load components, so our component didn't load by default because nothing called it explicitly in the test. ¯\_(ツ)_/¯
If the DynamoDB table is in a different region, make sure to set the region before initialising DynamoDB:
AWS.config.update({region: "your-dynamoDB-region" });
This works for me:)
Always ensure that you do one of the following:
The right default region is set up in the AWS CLI configuration files on all the servers and development machines that you are working on.
The best choice is to always specify these constants explicitly in a separate class/config in your project, import it in code, and use it in the boto3 calls. This gives you flexibility if you need to add or change regions to meet enterprise requirements.
If your resources are like mine and all over the place, you can define the region_name when you're creating the resource.
I do this for all my instantiations as it forces me to think about what I'm putting/calling where.
boto3.resource("dynamodb", region_name='us-east-2')
I was getting this issue in my .NET Core application.
The following fixed the issue for me, in the Startup class --> ConfigureServices method:
services.AddDefaultAWSOptions(
    new AWSOptions
    {
        Region = RegionEndpoint.GetBySystemName("eu-west-2")
    });
I got this error from Lambda: lifecycleIteration=0 lambda handler returned an error: ResourceNotFoundException: Requested resource not found
I spent a week fixing the issue.
The root cause, and the steps to track it down, are described in the GitHub issue below:
https://github.com/soto-project/soto/issues/595

Renewing IAM SSL Server Certificates

I have been using IAM server certificates for some of my Elastic Beanstalk applications, but now it's time to renew -- what is the correct process for replacing the current certificate with the updated cert?
When I try repeating an upload using the same command as before:
aws iam upload-server-certificate --server-certificate-name foo.bar --certificate-body file://foobar.crt --private-key file://foobar.key --certificate-chain file://chain_bundle.crt
I receive:
A client error (EntityAlreadyExists) occurred when calling the UploadServerCertificate operation: The Server Certificate with name foo.bar already exists.
Is the best practice to simply upload using a DIFFERENT name and then switch the load balancers to the new certificate? This makes perfect sense - but I wanted to verify I'm following the correct approach.
EDIT 2015-03-30
I did successfully update my certificate using the technique above. That is - I uploaded the new cert using the same technique as originally, but with a different name, then updated my applications to point to the new certificate.
The question remains however, is this the correct approach?
Yes, that is the correct approach.
Otherwise, you would be forced to roll it out to every system that used it at the same time, with no opportunity to test first, if desired.
My local practice, which I don't intend to imply is The One True Way™, yet which serves the purpose nicely, is to append -yyyy-mm (the year and month of the certificate's expiration date) to the end of the name, making it easy to differentiate between certificates at a glance... and with this pattern, when the list is sorted lexically, it's coincidentally sorted chronologically as well.

InvalidSignatureException when using boto3 for dynamoDB on aws

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables and instead passing the credentials directly in the method call, but the error persisted. Furthermore, to avoid any issues with trailing spaces I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure there are only alphanumeric characters) but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
It's a sign that your machine's time is off. Maybe you can check your:
1. Time zone
2. Time settings.
If there are automatic settings that are off, you should fix your time settings.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that, when accessing DynamoDB from a C# environment (using the AWS .NET SDK), I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.