Ask-cli lambda command uses wrong region setting - amazon-web-services

I am trying to set up a workflow via the ask-cli for developing an Alexa skill with an AWS Lambda backend. I have created a skill and it works fine when using "ask api ..." commands, but when I use an "ask lambda ..." command, such as "ask lambda download -f MySkill", it uses the wrong region setting. I get the error:
ResourceNotFoundException: Function not found: arn:aws:lambda:us-east-1:123456789:function:MySkill
As you can see, it is looking for the Lambda function in us-east-1, but my Lambda function is in eu-west-1, as specified in my skill.json file below. This question is pretty much a duplicate of https://forums.developer.amazon.com/questions/87922/ask-cli-does-not-use-region-setting-from-aws-confi.html. The answer in that question implies that you can add a region field somewhere in one of the json files, but I can't figure out where. Any help would be appreciated.
This is my ~/.ask/cli_config
{
  "profiles": {
    "default": {
      "aws_profile": "default",
      "token": {
        "access_token": "My_access_token",
        "refresh_token": "My_refresh_token",
        "token_type": "bearer",
        "expires_in": 3600,
        "expires_at": "2017-10-06T14:12:26.171Z"
      },
      "vendor_id": "My_vendor_id"
    }
  }
}
This is my ~/.aws/config
[default]
output = text
region = eu-west-1
This is my skill.json, which I get when I call "ask api get-skill -s skill_id > skill.json"
{
  "skillManifest": {
    "publishingInformation": {
      "locales": {
        "en-GB": {
          "name": "My Skill"
        }
      },
      "isAvailableWorldwide": true,
      "category": "PUBLIC_TRANSPORTATION",
      "distributionCountries": []
    },
    "apis": {
      "custom": {
        "endpoint": {
          "uri": "arn:aws:lambda:eu-west-1:123456789:function:MySkill"
        },
        "interfaces": []
      }
    },
    "manifestVersion": "1.0"
  }
}

For me it works if I edit the following file:
~/.aws/credentials (Linux, macOS, or Unix)
C:\Users\USERNAME\.aws\credentials (Windows)
[ask_cli_default]
aws_access_key_id=YOUR_AWS_ACCESS_KEY
aws_secret_access_key=YOUR_AWS_SECRET_KEY
region=eu-west-1

The region is specified in the lambda section of .ask/config. Example:
"lambda": [
{
"alexaUsage": [
"custom/default"
],
"arn": "arn:aws:lambda:eu-west-1:XXXXXXXXX:function:ask-premium-hello-world",
"awsRegion": "eu-west-1",
"codeUri": "lambda/custom",
"functionName": "ask-premium-hello-world",
"handler": "index.handler",
"revisionId": "XXXXXXXXXXXXXXXXXX",
"runtime": "nodejs8.10"
}
]
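Once awsRegion is corrected there, redeploying should make the lambda commands target the right region; with ask-cli v1 this should be something like:
ask deploy --target lambda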

Related

How to enable CloudWatch logging and X-Ray for a Step Functions state machine in Terraform?

In the AWS console we can easily enable CloudWatch logging and X-Ray for a Step Functions state machine, but I want my resource fully managed by Terraform. From this page: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sfn_state_machine
it seems like Terraform doesn't support this at the moment (also see: https://github.com/hashicorp/terraform-provider-aws/issues/12192).
Does anyone know if there is any workaround to achieve this? I'd really like to be able to enable both CloudWatch Logs & X-Ray from Terraform. I can't find much info on this. Might someone be able to help please? Many thanks.
UPDATE: This feature was recently released in AWS provider 3.27.0 (February 05, 2021).
Corresponding documentation link: sfn_state_machine#logging
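With provider 3.27.0 or later the null_resource workaround below is no longer needed; here is a minimal sketch of the native configuration, mirroring the resources from the workaround (tracing_configuration is included on the assumption that X-Ray tracing is wanted as well):
resource "aws_cloudwatch_log_group" "sfn" {
  name = "/aws/vendedlogs/states/myloggroup"
}

resource "aws_sfn_state_machine" "sfn_state_machine" {
  name     = "mystatemachine"
  role_arn = "arn:aws:iam::1234567890:role/service-role/StepFunctions-MyStateMachine-role-a6146d54"

  definition = jsonencode({
    Comment = "A Hello World example of the Amazon States Language using an AWS Lambda Function"
    StartAt = "HelloWorld"
    States  = { HelloWorld = { Type = "Pass", End = true } }
  })

  # vended CloudWatch logging, native since AWS provider 3.27.0
  logging_configuration {
    log_destination        = "${aws_cloudwatch_log_group.sfn.arn}:*"
    include_execution_data = true
    level                  = "ALL"
  }

  # X-Ray tracing
  tracing_configuration {
    enabled = true
  }
}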
For older provider versions you can wrap the command for enabling the logging inside a Terraform null_resource, as shown in the linked issue (Enabling Step Function Logging To CloudWatch #12192), something like below:
Prerequisite: aws-cli/2.1.1
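The Before/After listings below are the output of aws stepfunctions describe-state-machine for the state machine, e.g.:
aws stepfunctions describe-state-machine --state-machine-arn arn:aws:states:us-east-1:1234567890:stateMachine:mystatemachine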
Before:
{
  "stateMachineArn": "arn:aws:states:us-east-1:1234567890:stateMachine:mystatemachine",
  "name": "my-state-machine",
  "status": "ACTIVE",
  "definition": "{\n \"Comment\": \"A Hello World example of the Amazon States Language using an AWS Lambda Function\",\n \"StartAt\": \"HelloWorld\",\n \"States\": {\n \"HelloWorld\": {\n \"Type\": \"Pass\",\n \"End\": true\n }\n }\n}\n",
  "roleArn": "arn:aws:iam::1234567890:role/service-role/StepFunctions-MyStateMachine-role-a6146d54",
  "type": "STANDARD",
  "creationDate": 1611682259.919,
  "loggingConfiguration": {
    "level": "OFF",
    "includeExecutionData": false
  }
}
resource "aws_sfn_state_machine" "sfn_state_machine" {
name = "mystatemachine"
role_arn = "arn:aws:iam::1234567890:role/service-role/StepFunctions-MyStateMachine-role-a6146d54"
definition = <<EOF
{
"Comment": "A Hello World example of the Amazon States Language using an AWS Lambda Function",
"StartAt": "HelloWorld",
"States": {
"HelloWorld": {
"Type": "Pass",
"End": true
}
}
}
EOF
}
resource "aws_cloudwatch_log_group" "yada" {
name = "/aws/vendedlogs/states/myloggroup"
}
resource "null_resource" "enable_step_function_logging" {
triggers = {
state_machine_arn = aws_sfn_state_machine.sfn_state_machine.arn
logs_params=<<PARAMS
{
"level":"ALL",
"includeExecutionData":true,
"destinations":[
{
"cloudWatchLogsLogGroup":{
"logGroupArn":"${aws_cloudwatch_log_group.yada.arn}:*"
}
}
]
}
PARAMS
}
provisioner "local-exec" {
command = <<EOT
set -euo pipefail
aws stepfunctions update-state-machine --state-machine-arn ${self.triggers.state_machine_arn} --tracing-configuration enabled=true --logging-configuration='${self.triggers.logs_params}'
EOT
# interpreter = ["bash"]
}
}
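Note that set -euo pipefail is bash syntax, not POSIX sh, so if /bin/sh on the machine running Terraform is something like dash, the provisioner will fail; in that case the commented interpreter line needs to be enabled, and bash needs -c to read the command from an argument:
interpreter = ["bash", "-c"]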
After:
{
  "stateMachineArn": "arn:aws:states:us-east-1:1234567890:stateMachine:mystatemachine",
  "name": "mystatemachine",
  "status": "ACTIVE",
  "definition": "{\n \"Comment\": \"A Hello World example of the Amazon States Language using an AWS Lambda Function\",\n \"StartAt\": \"HelloWorld\",\n \"States\": {\n \"HelloWorld\": {\n \"Type\": \"Pass\",\n \"End\": true\n }\n }\n}\n",
  "roleArn": "arn:aws:iam::1234567890:role/service-role/StepFunctions-MyStateMachine-role-a6146d54",
  "type": "STANDARD",
  "creationDate": 1611687676.151,
  "loggingConfiguration": {
    "level": "ALL",
    "includeExecutionData": true,
    "destinations": [
      {
        "cloudWatchLogsLogGroup": {
          "logGroupArn": "arn:aws:logs:us-east-1:1234567890:log-group:/aws/vendedlogs/states/myloggroup:*"
        }
      }
    ]
  }
}
At the time this answer was originally written, this was still an ongoing feature request on Terraform; you can track the status on the GitHub issue linked above.

AWS CodePipeline: Execute deploy action in a different region than the one CodePipeline is triggered in

I'm setting up a pipeline to automate CloudFormation stack template deployments.
The pipeline itself is created in the AWS eu-west-1 region, but the CloudFormation stack templates could be deployed in any other region.
I know how to execute a pipeline action in a different account, but I don't see where to specify the region I would like my template to be deployed in, like we do with the AWS CLI: aws --region <region> cloudformation deploy ...
Is there any way to trigger a pipeline in one region and execute a deploy action in another region?
The action configuration properties don't offer such a possibility...
A workaround would be to run the aws cli deploy command in the CodeBuild container and specify the desired region, but I would like to know if there is a more elegant way to do it.
If you're looking to deploy to multiple regions, one after the other, you could create a CodePipeline pipeline in every region you want to deploy to, and set up S3 cross-region replication so that the output of the first pipeline becomes the input to a pipeline in the next region.
Here's a blog post explaining this further: https://aws.amazon.com/blogs/devops/building-a-cross-regioncross-account-code-deployment-solution-on-aws/
Since late Nov 2018, CodePipeline supports cross-region deploys. However, it still leaves a lot to be desired, as you need to create artifact buckets in each region and copy the deployment artifacts over to them (e.g. in the CodeBuild container, as you mentioned) before the Deploy action is triggered. So it's not as automated as it could be, but if you go through the process of setting it up, it works well.
CodePipeline now supports cross-region deployment. To trigger the deployment in a different region, we can specify the "Region": "us-west-2" property on the CloudFormation action, which will trigger the deployment in that specific region.
Steps to follow for this setup:
Create two buckets in two different regions, for example a bucket in "us-east-1" and a bucket in "us-west-2". (We can also use the buckets already created by CodePipeline when you set up a pipeline for the first time in a region.)
Configure the pipeline in such a way that it can use the respective bucket while taking an action in the respective region.
Specify the region in the action for CodePipeline.
Note: I have attached a sample CloudFormation template which will help you to do the cross-region CloudFormation deployment.
{
  "Parameters": {
    "BranchName": {
      "Description": "CodeCommit branch name for all the resources",
      "Type": "String",
      "Default": "master"
    },
    "RepositoryName": {
      "Description": "CodeCommit repository name",
      "Type": "String",
      "Default": "aws-account-resources"
    },
    "CFNServiceRoleDeployA": {
      "Description": "CFN service role for creating resources for account-A",
      "Type": "String",
      "Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/CloudFormation-service-role-cp"
    },
    "CodePipelineServiceRole": {
      "Description": "Service role for CodePipeline",
      "Type": "String",
      "Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/AWS-CodePipeline-Service"
    },
    "CodePipelineArtifactStoreBucket1": {
      "Description": "S3 bucket to store the artifacts",
      "Type": "String",
      "Default": "bucket-us-east-1"
    },
    "CodePipelineArtifactStoreBucket2": {
      "Description": "S3 bucket to store the artifacts",
      "Type": "String",
      "Default": "bucket-us-west-2"
    }
  },
  "Resources": {
    "AppPipeline": {
      "Type": "AWS::CodePipeline::Pipeline",
      "Properties": {
        "Name": { "Fn::Sub": "${AWS::StackName}-cross-account-pipeline" },
        "ArtifactStores": [
          {
            "ArtifactStore": {
              "Type": "S3",
              "Location": { "Ref": "CodePipelineArtifactStoreBucket1" }
            },
            "Region": "us-east-1"
          },
          {
            "ArtifactStore": {
              "Type": "S3",
              "Location": { "Ref": "CodePipelineArtifactStoreBucket2" }
            },
            "Region": "us-west-2"
          }
        ],
        "RoleArn": { "Ref": "CodePipelineServiceRole" },
        "Stages": [
          {
            "Name": "Source",
            "Actions": [
              {
                "Name": "SourceAction",
                "ActionTypeId": {
                  "Category": "Source",
                  "Owner": "AWS",
                  "Version": 1,
                  "Provider": "CodeCommit"
                },
                "OutputArtifacts": [
                  { "Name": "SourceOutput" }
                ],
                "Configuration": {
                  "BranchName": { "Ref": "BranchName" },
                  "RepositoryName": { "Ref": "RepositoryName" },
                  "PollForSourceChanges": true
                },
                "RunOrder": 1
              }
            ]
          },
          {
            "Name": "Deploy-to-account-A",
            "Actions": [
              {
                "Name": "stage-1",
                "InputArtifacts": [
                  { "Name": "SourceOutput" }
                ],
                "ActionTypeId": {
                  "Category": "Deploy",
                  "Owner": "AWS",
                  "Version": 1,
                  "Provider": "CloudFormation"
                },
                "Configuration": {
                  "ActionMode": "CREATE_UPDATE",
                  "StackName": "cloudformation-stack-name-account-A",
                  "TemplatePath": "SourceOutput::accountA.json",
                  "Capabilities": "CAPABILITY_IAM",
                  "RoleArn": { "Ref": "CFNServiceRoleDeployA" }
                },
                "RunOrder": 2,
                "Region": "us-west-2"
              }
            ]
          }
        ]
      }
    }
  }
}
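To try the template out, one could save it as, say, pipeline.json (the filename is arbitrary) and create the stack in the pipeline's home region; all the parameters have defaults, so for example:
aws cloudformation deploy --region us-east-1 --template-file pipeline.json --stack-name cross-region-pipeline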

EMR cluster created with CloudFormation not shown

I have added an EMR cluster to a stack. After updating the stack successfully (CloudFormation), I can see the master and slave nodes in the EC2 console and I can SSH into the master node. But the AWS console does not show the new cluster. Even aws emr list-clusters doesn't show the cluster. I have triple-checked the region and I am certain I'm looking at the right one.
Relevant CloudFormation JSON:
"Spark01EmrCluster": {
"Type": "AWS::EMR::Cluster",
"Properties": {
"Name": "Spark01EmrCluster",
"Applications": [
{
"Name": "Spark"
},
{
"Name": "Ganglia"
},
{
"Name": "Zeppelin"
}
],
"Instances": {
"Ec2KeyName": {"Ref": "KeyName"},
"Ec2SubnetId": {"Ref": "PublicSubnetId"},
"MasterInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Master"
},
"CoreInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Core"
}
},
"Configurations": [
{
"Classification": "spark-env",
"Configurations": [
{
"Classification": "export",
"ConfigurationProperties": {
"PYSPARK_PYTHON": "/usr/bin/python3"
}
}
]
}
],
"BootstrapActions": [
{
"Name": "InstallPipPackages",
"ScriptBootstrapAction": {
"Path": "[S3 PATH]"
}
}
],
"JobFlowRole": {"Ref": "Spark01InstanceProfile"},
"ServiceRole": "MyStackEmrDefaultRole",
"ReleaseLabel": "emr-5.13.0"
}
}
The reason is the missing VisibleToAllUsers property, which defaults to false. Since I'm using AWS Vault (i.e. using the STS AssumeRole API to authenticate), I'm basically a different user every time, so I couldn't see the cluster. I couldn't update the stack to add VisibleToAllUsers either, as I was getting Job flow ID does not exist.
The solution was to log in as the root user and fix things from there (I had to delete the cluster manually, but removing it from the stack template JSON and updating the stack would probably have worked if I hadn't messed things up already).
I then added the cluster back to the template (with VisibleToAllUsers set to true) and updated the stack as usual (AWS Vault).
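For reference, the fix is a single extra property inside the Properties block of the cluster shown above:
"VisibleToAllUsers": true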

Stack is hung using CloudFormation with SNS-backed CustomResources

I'm trying to learn how custom resources work in a CloudFormation template. I created a simple template to create an S3 bucket, but on creating the stack it remains in CREATE_IN_PROGRESS state for a long time and no bucket is created.
Is there anything I'm missing in the validated template below:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Building a bucket with custom resources in CloudFormation",
  "Parameters": {
    "NewBucket": {
      "Default": "",
      "Description": "S3 bucket containing customer assets",
      "Type": "String"
    }
  },
  "Conditions": {
    "NewBucket": {
      "Fn::Not": [
        {
          "Fn::Equals": [
            { "Ref": "NewBucket" },
            ""
          ]
        }
      ]
    }
  },
  "Resources": {
    "CustomResource": {
      "Properties": {
        "S3Bucket": { "Ref": "NewBucket" },
        "ServiceToken": "SNS topic ARN"
      },
      "Type": "AWS::CloudFormation::CustomResource"
    }
  },
  "Outputs": {
    "BucketName": {
      "Value": {
        "Fn::GetAtt": [ "CustomResource", { "Ref": "NewBucket" } ]
      }
    }
  }
}
It would appear that your SNS-backed custom resource provider is not sending a response back to CloudFormation, and the stack is stuck waiting for that response.
From Amazon Simple Notification Service-backed Custom Resources:
The custom resource provider processes the data sent by the template developer and determines whether the Create request was successful. The resource provider then uses the S3 URL sent by AWS CloudFormation to send a response of either SUCCESS or FAILED.
When the request is made to the SNS-backed service provider, it includes the following object:
{
  "RequestType": "Create",
  "ServiceToken": "arn:aws:sns:us-west-2:2342342342:Critical-Alerts-development",
  "ResponseURL": "https://cloudformation-custom-resource-response-uswest2.s3-us-west-2.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-west-2%3A497903502641%3Astack/custom-resource/6bf07a80-d44a-11e7-84df-503aca41a029%7CCustomResource%7C5a695f41-61d7-475b-9110-cdbaec04ee55?AWSAccessKeyId=AKIAI4KYMPPRGIACET5Q&Expires=1511887381&Signature=WmHQVqIDCBwQSfcBMpzTfiWHz9I%3D",
  "StackId": "arn:aws:cloudformation:us-west-2:asdasdasd:stack/custom-resource/6bf07a80-d44a-11e7-84df-503aca41a029",
  "RequestId": "5a695f41-61d7-475b-9110-cdbaec04ee55",
  "LogicalResourceId": "CustomResource",
  "ResourceType": "AWS::CloudFormation::CustomResource",
  "ResourceProperties": {
    "ServiceToken": "arn:aws:sns:us-west-2:234234234:Critical-Alerts-development",
    "S3Bucket": "test-example-com"
  }
}
You will need to send a success/fail response to the ResponseURL provided in the event for CloudFormation to continue processing.
I would also note that the bucket will not be created unless your custom resource provider creates it. The custom resource is only sending the request to the provider.
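A minimal sketch of such a provider, assuming a Node.js Lambda function subscribed to the SNS topic on a recent Node.js runtime (the actual bucket handling is elided; only the response plumbing is shown):
const https = require('https');

exports.handler = (event, context, callback) => {
  // the CloudFormation request is wrapped in the SNS message body
  const request = JSON.parse(event.Records[0].Sns.Message);

  // ... create/update/delete the S3 bucket here based on request.RequestType ...

  const body = JSON.stringify({
    Status: 'SUCCESS', // or 'FAILED'
    PhysicalResourceId: 'my-custom-resource', // stable identifier for this resource
    StackId: request.StackId,
    RequestId: request.RequestId,
    LogicalResourceId: request.LogicalResourceId
  });

  // PUT the response to the pre-signed S3 URL; CloudFormation waits for this
  const req = https.request(request.ResponseURL, {
    method: 'PUT',
    headers: { 'content-type': '', 'content-length': Buffer.byteLength(body) }
  }, () => callback(null));
  req.on('error', callback);
  req.end(body);
};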

What is wrong with my AWS Lambda function?

I have followed this tutorial to create thumbnails of images in another bucket with AWS Lambda: http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser-create-test-function-upload-zip-test.html
I have done all the earlier steps in the tutorial, but when I run the test event below (from the link above) in the Lambda console
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "C3D13FE58DE4C810",
        "x-amz-id-2": "FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "sourcebucket",
          "ownerIdentity": {
            "principalId": "A3NL1KOZZKExample"
          },
          "arn": "arn:aws:s3:::sourcebucket"
        },
        "object": {
          "key": "HappyFace.jpg",
          "size": 1024,
          "eTag": "d41d8cd98f00b204e9800998ecf8427e",
          "versionId": "096fKKXTRTtl3on89fVO.nfljtsv6qko"
        }
      }
    }
  ]
}
I get the error message:
Unable to resize sourcebucket/HappyFace.jpg and upload to sourcebucketresized/resized-HappyFace.jpg due to an error: PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. END RequestId: 345345...
I have changed the bucket name, eTag and image name. Do I need to change something else? My region is correct. Do I need to edit "principalId"? Where can I find it?
What is wrong?
In my case, the problem was the bucket region. In the example "us-east-1" is used, but my bucket is in "eu-west-1", so I had to change two things:
"awsRegion": "eu-west-1", in the Lambda test file
set the region in my Node.js Lambda code: AWS.config.update({"region": "eu-west-1"})
And of course you still need to set the following values in the Lambda test file:
name: 'your_bucket_name_here',
arn: 'arn:aws:s3:::your_bucket_name_here'
After these modifications it worked as expected.
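For reference, a minimal sketch of where that region line goes in the Lambda code (aws-sdk v2, as used by the tutorial; the S3 client must be constructed after the update):
var AWS = require('aws-sdk');

// set the region before creating the S3 client so requests go to the right endpoint
AWS.config.update({ region: 'eu-west-1' });
var s3 = new AWS.S3();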
Your problem is about the "endpoint". You must change "arn":"arn:aws:s3:::sourcebucket" to "arn":"arn:aws:s3:::(name_of_your_bucket)", and likewise "name":"sourcebucket" to "name":"(name_of_your_bucket)".
To avoid further problems, you must upload a jpg called HappyFace.jpg to your bucket, or change the key in the S3 Put test event code.
Regards
Try using this updated format (please carefully configure the key, bucket name, arn and awsRegion to your own settings):
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "s3": {
        "configurationId": "testConfigRule",
        "object": {
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901",
          "key": "HappyFace.jpg",
          "size": 1024
        },
        "bucket": {
          "arn": "arn:aws:s3:::myS3bucket",
          "name": "myS3bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          }
        },
        "s3SchemaVersion": "1.0"
      },
      "responseElements": {
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH",
        "x-amz-request-id": "EXAMPLE123456789"
      },
      "awsRegion": "us-east-1",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "eventSource": "aws:s3"
    }
  ]
}