Differences between AWS client and GUI - amazon-web-services

I have a simple proof of concept which is working in the GUI. I have an S3 bucket and a Lambda function, and when the S3 bucket's contents are altered, the Lambda logs the changes to CloudWatch.
I want to build on this and automate as much of the deployment as possible. I have written a shell script that zips up the Lambda's source and uploads it, creates a new version, and grabs the version's ARN to put into the S3 bucket's Event Notification so that the bucket uses the new version.
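In outline, the script does something like this (a simplified sketch; the source path is illustrative):
# zip the source, upload it, publish a new version, and capture that version's ARN
zip -r lambda.zip src/
aws lambda update-function-code --function-name my_lambda_name --zip-file fileb://lambda.zip
NEW_VERSION_ARN=$(aws lambda publish-version --function-name my_lambda_name --query FunctionArn --output text)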
This works fine in the GUI. I paste the ARN in and the bucket calls whatever version I've pasted in.
However, when I try to script that last step, it always fails with:
An error occurred (InvalidArgument) when calling the PutBucketNotificationConfiguration operation: Unable to validate the following destination configurations
There is never any content listed after 'the following destination configurations'.
The command which is failing is:
aws s3api put-bucket-notification-configuration --bucket my_bucket_name --notification-configuration file://config.json
The config file is nearly identical to the output of:
aws s3api get-bucket-notification-configuration --bucket my_bucket_name
Except that the version number at the end of the ARN has changed. Using the output of that command as the input for the previous command is successful.
That looks like:
{
    "LambdaFunctionConfigurations": [
        {
            "Id": "my_S3_event_notificaton",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:666666666666:function:my_lambda_name:4",
            "Events": [
                "s3:ObjectCreated:*",
                "s3:ObjectRemoved:*"
            ],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {
                            "Name": "Prefix",
                            "Value": ""
                        }
                    ]
                }
            }
        }
    ]
}
Changing the '4' at the end to '7' fails.

TL;DR Workaround: omit the :7 qualifier in the notification configuration and use the unqualified (latest) lambda version.
Error replicated
I replicated the OP's error using the CLI and using boto3 directly. put-bucket-notification-configuration fails with qualified ARNs (e.g. :4) unless the given version has previously been added as a target manually in the S3 Console. Odd. The Chrome network traces of the Console's request-response traffic did not clarify why.
import boto3

lambda_client = boto3.client("lambda")
s3_client = boto3.client("s3")
func = "arn:aws:lambda:us-east-1:666666666666:function:my_lambda_name"  # unqualified ARN
bucket = "my_bucket_name"

# List the lambda versions
res_versions = lambda_client.list_versions_by_function(FunctionName=func)
versions = [v["Version"] for v in res_versions["Versions"]]
# -> ['$LATEST', '1', '2', '3']

# Put the configuration with a qualified ARN (":2")
config = {"LambdaFunctionConfigurations": [{"LambdaFunctionArn": func + ":2", "Events": ["s3:ObjectCreated:*"]}]}
# Fails unless that version has previously been set manually as the target in the Console;
# an unqualified ARN always succeeds as expected.
res_put = s3_client.put_bucket_notification_configuration(Bucket=bucket, NotificationConfiguration=config)
Workaround: use the unqualified lambda version
You are manually updating the S3 notification to the latest lambda version. This step is not strictly necessary. Life is easier if you omit the :4 version qualifier on your ARN. "When you invoke a function using an unqualified ARN, Lambda implicitly invokes $LATEST." The notification configuration is then set once and does not need to change, no matter how many times you update your lambda.
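For example, reusing the bucket and function names from the question, something like this should go through (same Id, just without the version suffix):
# Same notification configuration, but targeting the unqualified function ARN
cat > config.json <<'EOF'
{
    "LambdaFunctionConfigurations": [
        {
            "Id": "my_S3_event_notificaton",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:666666666666:function:my_lambda_name",
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]
        }
    ]
}
EOF
aws s3api put-bucket-notification-configuration --bucket my_bucket_name --notification-configuration file://config.json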
AWS has invented the wheel
As @Maurice pointed out, you should consider AWS's mature infrastructure-as-code tools like CloudFormation and the Cloud Development Kit, whose job it is to "automate as much of the deployment as possible".

Related

Use the AWS CLI in a CDK ShellStep (pipeline) step

I have a CDK Pipeline stack that synths and deploys some infrastructure. After the infrastructure is created, I want to build a frontend react app that knows the URL to the newly constructed API Gateway. Once the app is built, I want to move the built files to a newly created S3 bucket.
I have the first two steps working no problem. I use a CfnOutput to get the API URL and the bucket name. I then use envFromCfnOutputs in my shell step to build the react app with the right env variable set up.
I can't figure out how to move my files to an S3 bucket. I've tried for days to figure out something using s3deploy, but I run into various permission issues. I thought I could try to just use the AWS CLI and move the files manually, but I don't know how to give the CLI command permission to add and delete objects. To make things a bit more complicated, my infrastructure is deployed to a separate account from where my pipeline lives.
Any idea how I can use the CLI for this, or any other thought on how I can move the built files to the bucket?
// set up pipeline
const pipeline = new CodePipeline(this, id, {
  crossAccountKeys: true,
  pipelineName: id,
  synth: mySynthStep
});

// add a stage with all my constructs
const pipelineStage = pipeline.addStage(myStage);

// create a shellstep that builds and moves the frontend assets
const frontend = new ShellStep('FrontendBuild', {
  input: source,
  commands: [
    'npm install -g aws-cli',
    'cd frontend',
    'npm ci',
    'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build',
    'aws s3 sync ./dist/ s3://$AWS_FRONTEND_BUCKET_NAME/ --delete'
  ],
  envFromCfnOutputs: {
    AWS_API_BASE_URL: myStage.apiURL,
    AWS_FRONTEND_BUCKET_NAME: myStage.bucketName
  }
});

// add my step as a post step to my stage.
pipelineStage.addPost(frontend);
I want to give this a shot and also suggest a solution for cross-account pipelines.
You've figured out the first half: the webapp gets built by passing the CloudFormation outputs into the environment of a shell action, so the build sees the correct values (e.g. the API endpoint URL).
You could now add permissions to a CodeBuildStep and attach a policy there to allow the step to call certain actions. That works if your pipeline and your bucket are in the same account (and also cross-account, with a lot more fiddling). But there is a problem with scoping those permissions:
The Pipeline is created (or self-updates) first, so at that point you do not know the bucket name or anything else; the resources are only deployed afterwards, to the pipeline's own account or to another account. So you need to assign names that are known beforehand. This is a general problem, and it grows if you e.g. also need to create a CloudFront invalidation and so on.
My approach is the following (in my case for a cross-account deployment):
1. Create a Role alongside the resources, with a predefined name, which allows the role to do what's needed (e.g. read/write the S3 bucket, create a CloudFront invalidation, ...), and allow a matching principal to assume that role (in my case an account principal)
Code snippet
const deploymentRole = new IAM.Role(this, "DeploymentRole", {
  roleName: "WebappDeploymentRole",
  assumedBy: new IAM.AccountPrincipal(pipelineAccountId),
});
// Grant permissions
bucket.grantReadWrite(deploymentRole);
2. Create a `CodeBuildStep` which has permissions to assume that role (by a pre-defined name)
Code snippet
new CodeBuildStep("Deploy Webapp", {
  rolePolicyStatements: [
    new PolicyStatement({
      actions: ["sts:AssumeRole"],
      resources: [
        `arn:aws:iam::${devStage.account}:role/${webappDeploymentRoleName}`,
      ],
      effect: Effect.ALLOW,
    }),
  ],
  ...
});
3. In the `commands` I call `aws sts assume-role` with the predefined role name and save the credentials to the environment for the following calls to use
Code snippet
envFromCfnOutputs: {
  bucketName: devStage.webappBucketName,
  cloudfrontDistributionID: devStage.webbappCloudfrontDistributionId,
},
commands: [
  "yarn run build-webapp",
  // Assume role, see https://stackoverflow.com/questions/63241009/aws-sts-assume-role-in-one-command
  `export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" $(aws sts assume-role --role-arn arn:aws:iam::${devStage.account}:role/${webappDeploymentRoleName} --role-session-name WebappDeploySession --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" --output text))`,
  `aws s3 sync ${webappPath}/build s3://$bucketName`,
  `aws cloudfront create-invalidation --distribution-id $cloudfrontDistributionID --paths \"/*\"`,
],
4. I then call other aws cli actions like `aws s3 sync ...` with the credentials from step 3, which are now correctly scoped to the actions needed
The ShellStep is likely running under the IAM permissions/role of the Pipeline. Add additional permissions to the Pipeline's role and that should trickle down to the AWS CLI call.
You'll probably also need to call buildPipeline before you try to do this:
pipeline.buildPipeline();
pipeline.pipeline.addToRolePolicy(...)

How to add settings for specific Lambdas based on which repository they're being deployed from?

I'm trying to change the maximum-event-age setting for Lambdas. Serverless does not currently appear to support this setting, so I'm planning to do it with a bash script after a deploy from GitHub.
Approach:
I'm considering querying AWS for the Lambdas in a specific CloudFormation stack. I'm guessing that when a repo is deployed, a new CF stack is created. Then, I want to iterate over the functions and use put-function-event-invoke-config to change the maximum-event-age setting on each lambda.
Problem:
The put-function-event-invoke-config command seems to require a function name. When querying for CF stacks, I get the lambda ARNs instead. I could possibly do some string manipulation to get just the lambda name, but it seems like a messy way to do it.
Am I on the right track with this? Is there a better way?
Edit:
The lambdas already exist and have been deployed. What I think I need to do is run some kind of script that goes through the list of lambdas that have been deployed from a single repository (there are multiple repos being deployed to the same environment) and changes the maximum-event-age setting, which has a default of 6 hours.
Here's an example output when I use the CLI to query CFN with aws cloudformation describe-stacks:
{
    "StackId": "arn:aws:cloudformation:us-east-1:***:stack/my-repository-name/0sdg70gfs-6124-12ea-a910-93c4ahj3d140",
    "StackName": "my-repository-name",
    "Description": "The AWS CloudFormation template for this Serverless application",
    "CreationTime": "2019-11-18T22:05:44.246Z",
    "LastUpdatedTime": "2019-03-19T23:26:04.382Z",
    "RollbackConfiguration": {},
    "StackStatus": "UPDATE_COMPLETE",
    "DisableRollback": false,
    "NotificationARNs": [],
    "Capabilities": [
        "CAPABILITY_IAM",
        "CAPABILITY_NAMED_IAM"
    ],
    "Outputs": [
        {
            "OutputKey": "TestLambdaFunctionQualifiedArn",
            "OutputValue": "arn:aws:lambda:us-east-1:***:function:my-test-function:3",
            "Description": "Current Lambda function version"
        },
I know that it is possible to run this command to change the maximum-event-age:
$ aws lambda --region us-east-1 put-function-event-invoke-config --function-name my-test-function --maximum-event-age-in-seconds 3600
But it appears to require the --function-name which I don't see in the CFN output in the query above.
How do I programmatically go through all of the functions in a CFN stack and modify the settings for maximum-event-age?
put-function-event-invoke-config accepts ARNs, which means one could query CFN based on stack names, which correspond to the repo that was deployed.
However, I decided to use list-functions to query for Lambdas and then list-tags, because our deploys are tagged with repo names. It seemed like a better option than querying CFN (also, the CFN output ARNs contain a version suffix, which means put-function-event-invoke-config won't run on them).
Then I can run the text output through a for loop in bash and use put-function-event-invoke-config to add the maximum-event-age setting, as sketched below.
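A rough sketch of that loop (the tag key "repository" is illustrative; adapt it to however your deploys are actually tagged):
# For every function tagged with our repo name, set the maximum event age to one hour
for arn in $(aws lambda list-functions --region us-east-1 --query "Functions[].FunctionArn" --output text); do
  repo=$(aws lambda list-tags --resource "$arn" --query "Tags.repository" --output text)
  if [ "$repo" = "my-repository-name" ]; then
    aws lambda put-function-event-invoke-config --function-name "$arn" --maximum-event-age-in-seconds 3600
  fi
done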

list-buckets s3api is not showing my bucket creation date?

I want to get my S3 bucket creation dates using s3api, but it is not showing the creation date that the AWS console shows.
When I try it with the CLI, the output is like this:
C:\Users\hero>aws s3api list-buckets
{
    "Buckets": [
        {
            "CreationDate": "2018-09-12T11:32:04.000Z",
            "Name": "campaign-app-api-prod-serverlessdeploymentbucket-"
        },
        {
            "CreationDate": "2018-09-12T10:06:44.000Z",
            "Name": "s3-api-log-events"
        }
    ]
}
In the console, the dates shown are different.
Why am I getting different dates from s3api? Is my interpretation of CreationDate wrong?
Any help is appreciated.
Thanks
The Date Created field displayed in the web console is based on the actual creation date registered in us-east-1, while the AWS CLI and SDKs will display the creation date depending on the specified region (or the default region set in your configuration).
When using an endpoint other than us-east-1, the CreationDate you receive is actually the last modified time according to the bucket's last replication time in this region. This date can change when making changes to your bucket, such as editing its bucket policy.
So, to get the CreationDates that the S3 console shows, you need to specify the us-east-1 region.
Try it like this in the aws cli: aws s3api list-buckets --region "us-east-1"
Check out this GitHub issue.
This python script:
import boto3

client = boto3.client('s3')
response = client.list_buckets()
returns the same dates as the AWS CLI and s3cmd. Therefore, it is not a bug in the CLI/s3cmd. Instead, it is different information coming from the Amazon S3 API call. So, I'm not sure where the console gets the 'correct' dates.
If there is a bug anywhere, it would be in the ListBuckets API call to AWS. This is best raised with AWS Support.

Claudia.js Auto generated lambda function not displaying on AWS console

So I have created a REST API backend with express.js and used claudia.js to upload my endpoints as lambda functions, and everything went smoothly. The endpoints work as expected and return the correct information. My only issue is that when I go to my AWS console I do not see the lambda functions that were created. I am not sure where this endpoint is being hosted. Has anyone else had this issue when working with claudia.js?
In your claudia.json file you should see something like:
"lambda": {
"role": "example-role",
"name": "example-test",
"region": "us-west-2"
},
us-west-2 being Oregon; switch the console to that region and you should see the functions.

AWS: Can't create a trigger for CodeCommit -> Lambda (eu-central-1)

Since AWS has made CodeCommit, CodeDeploy, ... available in other regions as well, I have decided to move some of those services from eu-west-1 to eu-central-1, closer to home.
In the existing setup, I have created a lambda function which gets triggered when a commit is pushed to a CodeCommit repo and sends a nice notification about it to our Slack channel. It works, great.
But now, when I've tried to recreate the same functionality in eu-central-1 (Frankfurt), I got stuck.
I can't seem to create a trigger for CodeCommit to trigger a Lambda function. I've tried in some other regions and it works flawlessly.
I know that:
the rights, roles, policies and permissions are set up correctly
it works in other regions
the CodeCommit item is missing from the list of triggers when you create a Lambda function
If I try to create a trigger the other way around, starting from the code commit side, I get an error:
AWS CodeCommit does not have access to the destination or the destination does not exist.
Any idea if triggering has been forgotten during the implementation of CodeCommit in eu-central-1 or are there any other tricks I can try to get this working?
Thank you!
I believe the issue is that your Lambda function may be missing a policy which allows CodeCommit to invoke your function. If your new lambda function does not specify CodeCommit as a principal, it will not be able to invoke your function.
Usually, the easiest way to have this policy is by setting up your trigger in the Lambda console. Unfortunately, because CodeCommit is newly launched in these regions, this quick setup is not yet available. However, you can still follow the manual setup steps outlined here: http://docs.aws.amazon.com/codecommit/latest/userguide/how-to-notify-lambda-cc.html#how-to-notify-lam-perm
TLDR:
Create a json file 'AllowAccessfromMyDemoRepo.json':
{
    "FunctionName": "MyCodeCommitFunction",
    "StatementId": "1",
    "Action": "lambda:InvokeFunction",
    "Principal": "codecommit.amazonaws.com",
    "SourceArn": "arn:aws:codecommit:eu-central-1:80398EXAMPLE:MyDemoRepo",
    "SourceAccount": "80398EXAMPLE"
}
Run the Lambda add-permission API:
aws lambda add-permission --cli-input-json file://AllowAccessfromMyDemoRepo.json
It's simply not yet available in Frankfurt. Maybe give it a try in Ireland instead?