Error creating a skill with CloudFormation - amazon-web-services

I have been developing Alexa skills for a month and want to create them via CloudFormation. For that I am using the following:
Lambda function
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Lambda Function from Cloud Formation by Felix Vazquez",
  "Resources": {
    "Lambda1": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": "felix-lambda-code",
          "S3Key": "hello_lambda.zip"
        },
        "Description": "Test with Cloud Formation",
        "FunctionName": "Felix-hello-world1234",
        "Handler": "lambda_function.lambda_handler",
        "Role": "arn:aws:iam::776831754616:role/testRol",
        "Runtime": "python2.7"
      }
    }
  }
}
Alexa Skill
"Resources": {
"23LT3": {
"Type": "Alexa::ASK::Skill",
"Properties": {
"AuthenticationConfiguration": {
"ClientId": "+my client ID+",
"ClientSecret": "+my client Secret+",
"RefreshToken": "+The token i generate via lwa+"
},
"VendorId": "+my vendor ID+",
"SkillPackage": {
"S3Bucket": "myskillpackagebucket",
"S3Key": "my_function10.zip",
"S3BucketRole": {
"Fn::GetAtt": [
"IAMRU6TJ",
"Arn"
]
},
"Overrides": {
"Manifest": {
"apis": {
"custom": {
"endpoint": {
"uri": {
"Fn::GetAtt": [
"Lambda1",
"Arn"
]
}}}}}}}}
IAM Role
{
  "Resources": {
    "IAMRU6TJ": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "s3.amazonaws.com",
                  "lambda.amazonaws.com"
                ]
              },
              "Action": [
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path": "/",
        "Policies": [
          {
            "PolicyName": "root",
            "PolicyDocument": {
              "Version": "2012-10-17",
              "Statement": [
                {
                  "Effect": "Allow",
                  "Action": "*",
                  "Resource": "*"
                }
              ]
            }
          }
        ]
      }
    }
  }
}
The skill depends on the Lambda function and the IAM role. When I create the stack, after a few seconds it gives me this error:
Could not assume the provided role. Reason: Access denied (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied; Request ID: b2e8762c-2593-11e9-b3ec-872599411915)
For the token I use:
ask util generate-lwa-tokens --scope "alexa::ask:skills:readwrite alexa::ask:models:readwrite profile"
[Screenshot: CloudFormation stack events after execution]

The problem is in your Alexa::ASK::Skill resource, at 23LT3['Properties']['SkillPackage']['S3BucketRole'].
The docs say
ARN of the role that grants the Alexa service permission to access the bucket and retrieve the skill package. This role is optional, and if not provided the bucket must be configured with a policy allowing this access, or be publicly accessible, in order for AWS CloudFormation to create the skill.
Currently your role allows s3.amazonaws.com and lambda.amazonaws.com to assume a role that can do anything in your AWS account, but what you need is to grant the Alexa service the permission to access the bucket.
Best practice would be to grant the least privilege necessary, but I get it if you are just testing things out.

I struggled to find the necessary details documented anywhere. Here is the role I used to get this working:
AlexaReadRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: AllowServiceToAssumeRole
          Effect: Allow
          Principal:
            Service:
              - alexa-appkit.amazon.com
          Action:
            - sts:AssumeRole
    Policies:
      - PolicyName: "AlexaS3Read"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action: "s3:GetObject"
              Resource: "arn:aws:s3:::<bucket-name>/<path-to-alexa-files>/*"
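To wire this into the skill resource from the question, SkillPackage.S3BucketRole should then point at this role's ARN. A minimal sketch (the resource name MySkill and the placeholder values are illustrative, not from the original template):
MySkill:
  Type: Alexa::ASK::Skill
  Properties:
    VendorId: "<vendor-id>"
    AuthenticationConfiguration:
      ClientId: "<client-id>"
      ClientSecret: "<client-secret>"
      RefreshToken: "<refresh-token>"
    SkillPackage:
      S3Bucket: "<bucket-name>"
      S3Key: "skill-package.zip"
      # Hand the Alexa service the role defined above so it can read the package
      S3BucketRole: !GetAtt AlexaReadRole.Arn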

Related

Is there a way to create an AWS Lambda execution role with CloudFormation?

I'm trying to create a Lambda function with CloudFormation, but it requires a Lambda execution role - is there a way I can generate one using CloudFormation?
Yes, CloudFormation can be used to create an IAM role. The lambda execution role is an IAM role like any other IAM role. The documentation for doing so shows this example:
MyRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument: Json
    Description: String
    ManagedPolicyArns:
      - String
    MaxSessionDuration: Integer
    Path: String
    PermissionsBoundary: String
    Policies:
      - Policy
    RoleName: String
    Tags:
      - Tag
Then in the Lambda function, you reference the role by its ARN, for example with Fn::GetAtt (a plain Ref on an AWS::IAM::Role returns the role name, while the Role property expects an ARN). Ex:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Role: !GetAtt MyRole.Arn
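For a concrete minimal sketch of such a role (the resource name is illustrative; the trust policy and managed policy shown are the usual ones for a basic Lambda execution role):
MyRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com   # only the Lambda service may assume this role
          Action: sts:AssumeRole
    ManagedPolicyArns:
      # lets the function write logs to CloudWatch Logs
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole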
You can also create an IAM role whose policy takes the region and account ID from the predefined CloudFormation pseudo parameters and assign it to your Lambda resources. Please refer to the following example:
"Resources": {
"AheadLambdaRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {
"Fn::Sub": "AHEADLambdaRole-${EnvName}"
},
"AssumeRolePolicyDocument": {
"Statement": [
{
"Action": [
"sts:AssumeRole"
],
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
}
}
],
"Version": "2012-10-17"
},
"Policies": [{
"PolicyDocument" : {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": {
"Fn::Sub": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:*"
}
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
{ "Fn::Sub": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/LambdaName:*"}
]
}
]
},
"PolicyName" : "NameOfInlinepolicy"
}]
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
"arn:aws:iam::aws:policy/AmazonSSMFullAccess"
],
"Path": "/"
}
}}
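The ${EnvName} substitution assumes a template parameter named EnvName, and the role still has to be attached to a function. A rough sketch of those two pieces (shown in YAML for brevity, with an invented MyFunction resource and placeholder code location):
Parameters:
  EnvName:
    Type: String
    Description: Environment name appended to the role name

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      # attach the role created above via its ARN
      Role: !GetAtt AheadLambdaRole.Arn
      Handler: index.handler
      Runtime: python3.9
      Code:
        S3Bucket: my-code-bucket   # placeholder
        S3Key: my_function.zip     # placeholder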

AWS S3 creation error: "The event is not supported for notifications (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument"

I'm developing my first Lambda in Cloud9 that is supposed to be triggered by an S3 event. Unfortunately, when I try to deploy, I constantly get this CloudFormation error:
"The event is not supported for notifications (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: CF3108325F3C9B60; S3 Extended Request ID: wcWzRXUu7YJn/BVnPDtOx7yBHllhIPELEwsTweqVcfwLw1hkR2iDiSmQbxeL3Hrtp7Kv58ujS2s=; Proxy: null)"
CloudFormation events from the AWS Management Console: [screenshot]
Below is my AWS SAM template.yaml file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  olatexOrdersInputDirectory:
    Type: 'AWS::S3::Bucket'
  olatexXlsxOrderLoader:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: olatexXlsxOrderLoader/index.handler
      Runtime: nodejs12.x
      Description: ''
      MemorySize: 128
      Timeout: 15
      Policies:
        - AWSLambdaBasicExecutionRole
        - AmazonS3FullAccess
        - AmazonDynamoDBFullAccess
      Events:
        S3Event:
          Type: S3
          Properties:
            Bucket: !Ref olatexOrdersInputDirectory
            Events: S3:ObjectCreated:*
I added the lines after Policies: to extend the IAM policies, because I suspected the error was related to insufficient privileges, but it didn't help.
Below I'm attaching the CloudFormation template that is generated from SAM's template.yaml:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "An AWS Serverless Specification template describing your function.",
"Resources": {
"olatexXlsxOrderLoader": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": "cloud9-026528720964-sam-deployments-eu-central-1",
"S3Key": "6aa2a5885a77ea790684cb345d822ed8"
},
"Description": "",
"Tags": [
{
"Value": "SAM",
"Key": "lambda:createdBy"
}
],
"MemorySize": 128,
"Handler": "olatexXlsxOrderLoader/index.handler",
"Role": {
"Fn::GetAtt": [
"olatexXlsxOrderLoaderRole",
"Arn"
]
},
"Timeout": 15,
"Runtime": "nodejs12.x"
}
},
"olatexXlsxOrderLoaderRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"sts:AssumeRole"
],
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
}
}
]
},
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
"arn:aws:iam::aws:policy/AmazonS3FullAccess",
"arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
],
"Tags": [
{
"Value": "SAM",
"Key": "lambda:createdBy"
}
]
}
},
"olatexOrdersInputDirectory": {
"Type": "AWS::S3::Bucket",
"Properties": {
"NotificationConfiguration": {
"LambdaConfigurations": [
{
"Function": {
"Fn::GetAtt": [
"olatexXlsxOrderLoader",
"Arn"
]
},
"Event": "S3:ObjectCreated:*"
}
]
}
},
"DependsOn": [
"olatexXlsxOrderLoaderS3EventPermission"
]
},
"olatexXlsxOrderLoaderS3EventPermission": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"Action": "lambda:InvokeFunction",
"SourceAccount": {
"Ref": "AWS::AccountId"
},
"FunctionName": {
"Ref": "olatexXlsxOrderLoader"
},
"Principal": "s3.amazonaws.com"
}
}
}
}
Thanks a lot for all your help!
Regards
Andrzej
Based on the comments.
The issue was caused by using S3:ObjectCreated:*, rather than s3:ObjectCreated:*.
S3 event names are case-sensitive.
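Applied to the SAM template above, the fix is just the lowercase service prefix in the Events value:
Events:
  S3Event:
    Type: S3
    Properties:
      Bucket: !Ref olatexOrdersInputDirectory
      Events: s3:ObjectCreated:*   # lowercase "s3" is required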

Creating CloudWatch rule with AWS CloudFormation not linking to role

I am trying to create a CloudWatch rule that triggers on a schedule and executes a state machine (Step Functions). I'm using CloudFormation to create this, and everything creates fine except for the association of the IAM role used by the rule with the rule itself. Here is what I mean:
Notice under 'Use Existing Role' it's blank.
Here is the CF template portion that deals with the rule and its role.
"SFInvoke":{
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": {
"Fn::Sub": "states.${AWS::Region}.amazonaws.com"
}
},
"Action": "sts:AssumeRole"
}
]
},
"Policies": [
{
"PolicyName": "StepFunctionsInvoke",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"states:StartExecution"
],
"Resource": { "Ref" : "StateMachine"}
}
]
}
}
]
}
},
"CloudWatchStateMachineSDCEventRule": {
"Type":"AWS::Events::Rule",
"Properties": {
"Description":"CloudWatch trigger for the InSite Static Data Consumer",
"ScheduleExpression": "rate(5 minutes)",
"State":"ENABLED",
"Targets":[{
"Arn":{ "Ref" : "StateMachine"},
"Id":"StateMachineTargetId",
"RoleArn":{
"Fn::GetAtt": [
"SFInvoke",
"Arn"
]
}
}]
}
},
You want the SFInvoke role to show up on the Use existing role selector?
If that is the case, you need to set the Principal to events instead of states.
You're editing the event target in the screenshot above, not the step function. Principal defines the service that can assume the role; in your case that is the events service.
Try this for role creation:
"SFInvoke":{
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"Policies": [
{
"PolicyName": "StepFunctionsInvoke",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"states:StartExecution"
],
"Resource": { "Ref" : "StateMachine"}
}
]
}
}
]
}
}
The YAML equivalent would be roughly as follows, with events.amazonaws.com as the Principal and states:StartExecution as the allowed Action, so the rule can start the execution of a Step Functions state machine:
AWSEventsInvokeStepFunctions:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - events.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: AWSEventsInvokeStepFunctions
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - states:StartExecution
              Resource: !Sub "arn:aws:states:${AWS::Region}:${AWS::AccountId}:stateMachine:*"
This role, which is now generic, can be applied to a CloudWatch Events rule, giving the rule permission to start the execution of a Step Functions state machine based on an Amazon S3 event:
AmazonCloudWatchEventRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      source:
        - aws.s3
      detail-type:
        - 'AWS API Call via CloudTrail'
      detail:
        eventSource:
          - s3.amazonaws.com
        eventName:
          - PutObject
        requestParameters:
          bucketName:
            - !Ref EventBucket
    Targets:
      - RoleArn: !GetAtt AWSEventsInvokeStepFunctions.Arn
        Arn: !Sub "arn:aws:states:${AWS::Region}:${AWS::AccountId}:stateMachine:MyStateMachine"
        Id: !Sub "StepExecution"
You can read more in the AWS documentation on starting the execution of a state machine based on an Amazon S3 event.

Adding autoscaling to AWS DynamoDB via CloudFormation for already existing table

Our team was quite excited to see the autoscaling feature announced for AWS DynamoDB, and when trying it out by adding the configuration in the web interface we discovered that it's a good match for our applications' needs.
However, attempts to add the same configuration via CF have proved to be a bit more complicated. Following the example provided in this article - http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-examples-application-autoscaling - below is the result, which does not work. Stack detail:
17:02:25 UTC+0300 UPDATE_ROLLBACK_IN_PROGRESS AWS::CloudFormation::Stack my-stack The following resource(s) failed to create: [WriteCapacityScalableTarget].
17:02:24 UTC+0300 CREATE_FAILED AWS::ApplicationAutoScaling::ScalableTarget WriteCapacityScalableTarget table/TableName|dynamodb:table:WriteCapacityUnits|dynamodb already exists
17:02:18 UTC+0300 CREATE_IN_PROGRESS AWS::ApplicationAutoScaling::ScalableTarget WriteCapacityScalableTarget
17:02:15 UTC+0300 CREATE_COMPLETE AWS::IAM::Role ScalingRole
17:01:42 UTC+0300 CREATE_IN_PROGRESS AWS::IAM::Role ScalingRole Resource creation Initiated
17:01:42 UTC+0300 CREATE_IN_PROGRESS AWS::IAM::Role ScalingRole
17:01:37 UTC+0300 UPDATE_IN_PROGRESS AWS::CloudFormation::Stack my-stack User Initiated
My CF script is as follows:
{
"Resources": {
"MyCustomTableName": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "TableName",
"AttributeDefinitions": [
{
"AttributeName": "someAttribute1:someAttribute2",
"AttributeType": "S"
}
],
"KeySchema": [
{
"AttributeName": "someAttribute1:someAttribute2",
"KeyType": "HASH"
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": 1,
"WriteCapacityUnits": 1
},
"StreamSpecification": {
"StreamViewType": "NEW_AND_OLD_IMAGES"
}
}
},
"WriteCapacityScalableTarget": {
"Type": "AWS::ApplicationAutoScaling::ScalableTarget",
"Properties": {
"MaxCapacity": 30,
"MinCapacity": 1,
"ResourceId": {
"Fn::Join": [
"/",
[
"table",
{
"Ref": "MyCustomTableName"
}
]
]
},
"RoleARN": {
"Fn::GetAtt": [
"ScalingRole",
"Arn"
]
},
"ScalableDimension": "dynamodb:table:WriteCapacityUnits",
"ServiceNamespace": "dynamodb"
}
},
"ScalingRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"application-autoscaling.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:DescribeTable",
"dynamodb:UpdateTable",
"cloudwatch:PutMetricAlarm",
"cloudwatch:DescribeAlarms",
"cloudwatch:GetMetricStatistics",
"cloudwatch:SetAlarmState",
"cloudwatch:DeleteAlarms"
],
"Resource": "*"
}
]
}
}
]
}
},
"WriteScalingPolicy": {
"Type": "AWS::ApplicationAutoScaling::ScalingPolicy",
"Properties": {
"PolicyName": "WriteAutoScalingPolicy",
"PolicyType": "TargetTrackingScaling",
"ScalingTargetId": {
"Ref": "WriteCapacityScalableTarget"
},
"TargetTrackingScalingPolicyConfiguration": {
"TargetValue": 70.0,
"ScaleInCooldown": 60,
"ScaleOutCooldown": 60,
"PredefinedMetricSpecification": {
"PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
}
}
}
}
}
}
If anyone can shed some light as to why this is happening, I'd be much obliged :)
Going by the error message
CREATE_FAILED AWS::ApplicationAutoScaling::ScalableTarget WriteCapacityScalableTarget table/TableName|dynamodb:table:WriteCapacityUnits|dynamodb already exists
You are creating a DynamoDB table with the fixed name "TableName" ("TableName": "TableName"). You cannot have two DynamoDB tables with the same name within a region.
Go to the DynamoDB console, check whether such a table already exists, and delete it. After that, the template should work fine.
Option 2: If you want to go ahead with the existing table, then you can remove the AWS::DynamoDB::Table resource from your CF Template.
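Note that with option 2 the scalable target can no longer Ref the table resource, so the ResourceId would have to name the existing table directly. A rough sketch in YAML, using the table name from the question:
WriteCapacityScalableTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: 30
    MinCapacity: 1
    # the pre-existing table is referenced by name, not via a Ref to a stack resource
    ResourceId: table/TableName
    RoleARN: !GetAtt ScalingRole.Arn
    ScalableDimension: dynamodb:table:WriteCapacityUnits
    ServiceNamespace: dynamodb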
For me this error came up while setting up auto-scaling for an already existing DynamoDB table. Make sure you've removed any manual (via console) auto-scaling that you've set up for that table. Then re-execute, and the stack should reach the UPDATE_COMPLETE stage.

How to reference Instance Profile in .ebextension

I need a queue in my Elastic Beanstalk application, so I create the queue and the queue policy with this snippet in my .ebextensions/app.conf:
Resources:
  BackgroundTaskQueue:
    Type: "AWS::SQS::Queue"
  AllowWorkerSQSPolicy:
    Type: "AWS::SQS::QueuePolicy"
    Properties:
      Queues:
        - Ref: "BackgroundTaskQueue"
      PolicyDocument:
        Version: "2008-10-17"
        Id: "PublicationPolicy"
        Statement:
          - Sid: "Allow-Create-Task"
            Effect: "Allow"
            Principal:
              AWS: "*"
            Action:
              - "sqs:SendMessage"
            Resource:
              Fn::GetAtt:
                - "BackgroundTaskQueue"
                - "Arn"
Unfortunately I cannot find a way to reference the instance profile of my EC2 instances in the autoscaling group (at the moment the queue is open to the world). I tried two approaches:
Reading the configuration:
Principal:
  AWS:
    Fn::GetOptionSetting:
      OptionName: "IamInstanceProfile"
The OptionName is always retrieved from the aws:elasticbeanstalk:customoption namespace, but IamInstanceProfile is defined in the aws:autoscaling:launchconfiguration namespace as far as I know. -> No luck
Reading from the actual AWSEBAutoScalingLaunchConfiguration resource:
Principal:
  AWS:
    Fn::GetAtt:
      - "AWSEBAutoScalingLaunchConfiguration"
      - "IamInstanceProfile"
This approach fails because the attribute IamInstanceProfile is not exposed.
Has anyone found a way to make such a policy work?
Does anyone know how to instruct GetOptionSetting to look in a different namespace?
Anyone found a way to GetAtt the instance profile?
You need to set up the instance profile outside of the EB environment. You can use the 'aws iam' command to create policies, roles and instance profiles (http://docs.aws.amazon.com/cli/latest/reference/iam/index.html#cli-aws-iam), then specify the profile in the option settings:
namespace: aws:autoscaling:launchconfiguration
option_name: IamInstanceProfile
value: your-instance-profile-name
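If you also want to reference that role from the queue policy inside .ebextensions, one possible workaround (a sketch only, not verified here) is to duplicate the role ARN as a custom option, since Fn::GetOptionSetting reads from the aws:elasticbeanstalk:customoption namespace. The option name InstanceRoleArn and the ARN below are placeholders, and note that an SQS policy Principal expects a role or account ARN rather than an instance profile name:
option_settings:
  aws:elasticbeanstalk:customoption:
    InstanceRoleArn: "arn:aws:iam::123456789012:role/your-instance-role"   # placeholder

Resources:
  AllowWorkerSQSPolicy:
    Type: "AWS::SQS::QueuePolicy"
    Properties:
      Queues:
        - Ref: "BackgroundTaskQueue"
      PolicyDocument:
        Statement:
          - Sid: "Allow-Create-Task"
            Effect: "Allow"
            Principal:
              # resolves to the custom option defined above at deploy time
              AWS:
                Fn::GetOptionSetting:
                  OptionName: "InstanceRoleArn"
                  DefaultValue: "*"
            Action:
              - "sqs:SendMessage"
            Resource:
              Fn::GetAtt: ["BackgroundTaskQueue", "Arn"]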
If you are using eb_deployer, there is a self-contained way of doing it:
Create a CloudFormation template to define your resources stack, e.g. config/my-resources.json:
{
"Outputs": {
"InstanceProfile": {
"Description": "defines what ec2 instance can do with aws resources",
"Value": { "Ref": "InstanceProfile" }
}
},
"Resources": {
"Role": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": ["ec2.amazonaws.com"]
},
"Action": ["sts:AssumeRole"]
}]
},
"Path": "/",
"Policies": [ {
"PolicyName": "S3Access",
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*",
"s3:PutObject"
],
"Resource": "*"
}
]
}
}, {
"PolicyName": "SQSAccess",
"PolicyDocument": {
"Statement": [ {
"Effect": "Allow",
"Action": [
"sqs:ChangeMessageVisibility",
"sqs:DeleteMessage",
"sqs:ReceiveMessage",
"sqs:SendMessage"
],
"Resource": "*"
}]
}
}]
}
},
"InstanceProfile": {
"Type": "AWS::IAM::InstanceProfile",
"Properties": {
"Path": "/",
"Roles": [ { "Ref": "Role" } ]
}
}
}
}
Add a "resources" section into your eb_deployer.yml
resources:
  template: config/my-resources.json
  capabilities:
    - CAPABILITY_IAM
  outputs:
    InstanceProfile:
      namespace: aws:autoscaling:launchconfiguration
      option_name: IamInstanceProfile
In the above example we defined an instance profile with policies enabling specific access to S3 and SQS, then mapped the instance profile name (an output of the template) to Elastic Beanstalk option settings.
Take a look at this: https://github.com/ThoughtWorksStudios/eb_deployer/wiki/Elastic-Beanstalk-Tips-and-Tricks#setup-instance-profile-for-your-ec2-instances