I’m trying to create a DynamoDB table without a name property in the .yml file, so that it’s named by CloudFormation, and export its name to Python for access. Can I do that, and if so, how?
My current idea is to export the name as an SSM parameter, but I’m not sure how.
You can tag it in the CloudFormation template and get resources by tag in boto3.
import boto3
client = boto3.client('ec2')
custom_filter = [{
'Name': 'tag:Owner',
'Values': ['user@example.com']}]
response = client.describe_instances(Filters=custom_filter)
(This code is mostly adapted from https://stackoverflow.com/a/48073016/10553976)
And this would correspond to tagging the instances with the following:
Tags:
-
Key: "Owner"
Value: "user@example.com"
Important: if you want to apply the tags to something other than EC2 instances, you will need to use the appropriate describe method in the client for that resource type.
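For DynamoDB tables specifically, one option (an addition to the approach above, not from the original answer) is the Resource Groups Tagging API, which looks up resources of any type by tag. A minimal sketch; the tag key, tag value, and table name are placeholders:

```python
def table_names(resource_tag_mappings):
    # get_resources returns entries whose ResourceARN ends with
    # "table/<name>" for DynamoDB tables; strip off the ARN prefix.
    return [m["ResourceARN"].rsplit("/", 1)[1] for m in resource_tag_mappings]

# Against a real account (boto3 calls sketched, not executed here):
# import boto3
# tagging = boto3.client("resourcegroupstaggingapi")
# resp = tagging.get_resources(
#     TagFilters=[{"Key": "Owner", "Values": ["user@example.com"]}],
#     ResourceTypeFilters=["dynamodb:table"],
# )
# names = table_names(resp["ResourceTagMappingList"])
```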
I assume your question is related to AWS CloudFormation, because you mention a .yml file.
You can report the name in the Outputs section of your CloudFormation template if your Python function is declared in another template. You can then use the describe-stacks API or CLI to fetch the value from the output.
See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html and https://docs.aws.amazon.com/cli/latest/reference/cloudformation/describe-stacks.html
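As a sketch of the describe-stacks route in Python (boto3), assuming the table's name is exported under an output key of your choosing, here TableName; the stack name is a placeholder:

```python
def find_output(stack, key):
    # A stack dict from describe_stacks carries its outputs as a list of
    # {"OutputKey": ..., "OutputValue": ...} entries.
    for out in stack.get("Outputs", []):
        if out["OutputKey"] == key:
            return out["OutputValue"]
    raise KeyError(key)

# Against a real stack (boto3 calls sketched, not executed here):
# import boto3
# cfn = boto3.client("cloudformation")
# stack = cfn.describe_stacks(StackName="my-stack")["Stacks"][0]
# table_name = find_output(stack, "TableName")
```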
If your Python function is declared in the same template, you can just refer to your logical resource to get the name (as per https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html).
For example,
The section that creates the table
MyTable:
Type: AWS::DynamoDB::Table
Properties:
... your properties ...
The section that refers to it (here is an example with AWS::AppSync::DataSource, but it applies to any type of resource):
MyTableDataSource:
Type: AWS::AppSync::DataSource
Properties:
...
DynamoDBConfig:
TableName:
Ref: MyTable
AwsRegion:
Fn::Sub: ${AWS::Region}
or to get the table ARN in an IAM policy:
Policies:
- PolicyName: mypolicy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:DeleteItem
- dynamodb:UpdateItem
- dynamodb:Query
- dynamodb:Scan
Resource:
- Fn::Join:
- ''
- - Fn::GetAtt:
- MyTable
- Arn
- '*'
According to your reply to my comment, you wish to add the DynamoDB table to the .yml file. If the table and the Lambda are in the same .yml, then you can simply do !Ref YourTable inside the Lambda Environment variables.
Something like this:
YourLambda:
Type: AWS::Serverless::Function
Properties:
Environment:
Variables:
YourTableName: !Ref YourTable
You will also need to add a policy attached to the Lambda, under Properties -> Policies, and you can reference the table name there in the same way.
However, if you wish to reference the name without moving the DynamoDB table inside the .yml file, then you have to make it a static reference by creating an entry in Parameter Store, and then referencing it like so (making sure your CFN has access to ssm:GetParameter):
YourLambda:
Type: AWS::Serverless::Function
Properties:
Environment:
Variables:
YourTableName: '{{resolve:ssm:/PATH/TO/TABLENAME:1}}'
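Inside the function, either way the table name then arrives as an ordinary environment variable. A minimal handler sketch (the variable name matches the snippet above; everything else is illustrative):

```python
import os

def handler(event, context):
    # The env var is injected by the Environment/Variables block above.
    table_name = os.environ["YourTableName"]
    # e.g. boto3.resource("dynamodb").Table(table_name) from here on
    return {"table": table_name}
```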
Here’s how I ended up doing it:
Python:
import boto3
ssm = boto3.client('ssm')
resp = ssm.get_parameter(Name='TableName', WithDecryption=False)
tableName = resp['Parameter']['Value']
This is my cloudformation yaml file:
Resources:
DynamoTable:
Type: "AWS::DynamoDB::Table"
Properties:
AttributeDefinitions:
- AttributeName: A_Key
AttributeType: "S"
- AttributeName: Serial
AttributeType: "S"
KeySchema:
- AttributeName: A_Key
KeyType: HASH
- AttributeName: Serial
KeyType: RANGE
DynamoTableParameter:
Type: "AWS::SSM::Parameter"
Properties:
Name: "TableName"
Type: String
Value: !Ref DynamoTable
I'm using AWS Lambda as the root account, but when I try to add DynamoDB as a trigger in Lambda, AWS reports a permissions error:
Please ensure the role can perform the GetRecords, GetShardIterator, DescribeStream, ListShards, and ListStreams Actions on your stream in IAM.
I'm using the root account, so why does a permissions error occur? I want to use the root account.
Your function uses a Lambda execution role; your IAM user/root permissions do not apply here. You have to update the execution role with DynamoDB permissions.
Lambda functions use an execution role to access AWS services and resources; this can be set in the Lambda creation wizard or in the CloudFormation script.
Step 1.
Role: !GetAtt DeleteAppConfigurationsLambdaRole.Arn. Details [here][1].
Example:
Let's create a DynamoDB table with a stream enabled, using a CFN script as below.
DynamoDBTable:
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: "id"
AttributeType: "S"
KeySchema:
-
AttributeName: "id"
KeyType: "HASH"
TableName: DynamoDBTable
SSESpecification:
SSEEnabled: true
StreamSpecification:
StreamViewType: "NEW_AND_OLD_IMAGES"
Then create a Lambda execution role which has access to the stream, as below:
DynamoDBStreamLambdaRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Action:
- sts:AssumeRole
Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
Version: '2012-10-17'
Path: /
RoleName: "IAM-ROLE-DynamoDBStreamLambdaRole"
Policies:
- PolicyDocument:
Statement:
- Action:
- dynamodb:DescribeStream
- dynamodb:GetRecords
- dynamodb:GetShardIterator
- dynamodb:ListStreams
Effect: Allow
Resource: !GetAtt DynamoDBTable.StreamArn
Version: '2012-10-17'
PolicyName: "IAM-POLICY-DynamoDBStreamLambdaStreamaccess"
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
Then you can attach this role to the lambda as described in step 1.
[1]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html#cfn-lambda-function-role
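For completeness, a minimal sketch of the Lambda handler such a role would back; the event shape is the standard DynamoDB Streams payload, and the record handling is purely illustrative:

```python
def handler(event, context):
    # A DynamoDB Streams event carries one Records entry per item change.
    processed = 0
    for record in event.get("Records", []):
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage holds the item in DynamoDB's typed JSON form.
            print(record["eventName"], record["dynamodb"].get("NewImage", {}))
        processed += 1
    return {"processed": processed}
```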
We have a simple serverless application that contains a DynamoDB table, a few Lambdas, and an API endpoint. We currently have the app deployed in the dev stage.
We are having some trouble deploying to the prod stage.
Here is the serverless.yaml file.
service: lookups
# app and org for use with dashboard.# serverless.com
app: lookups
org: xxxxxx
provider:
name: aws
runtime: python3.8
environment:
DYNAMO_DB_LOOKUP_TABLE_NAME: lookup_${self:provider.stage}
S3_BUCKET: com.yyyyy.lookups.${self:provider.stage}
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:GetItem
Resource: "arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.DYNAMO_DB_LOOKUP_TABLE_NAME}"
functions:
createOrUpdateLookups:
handler: createOrUpdateLookups.createOrUpdateLookups
description: create or update lookup entry in dynamodb
environment:
lookupTable: ${self:provider.environment.DYNAMO_DB_LOOKUP_TABLE_NAME}
events:
- s3:
bucket: ${self:provider.environment.S3_BUCKET}
event: s3:ObjectCreated:*
rules:
- suffix: .json
getLookup:
handler: getLookup.getLookup
description: get presigned url for a lookup by location and lookup type
environment:
lookupTable: ${self:provider.environment.DYNAMO_DB_LOOKUP_TABLE_NAME}
lookupBucket: ${self:provider.environment.S3_BUCKET}
events:
- http:
path: v1/lookup
method: get
request:
parameters:
querystrings:
location: true
lookupType: true
resources:
Resources:
lookupTable:
Type: AWS::DynamoDB::Table
DeletionPolicy: Retain
Properties:
TableName: ${self:provider.environment.DYNAMO_DB_LOOKUP_TABLE_NAME}
AttributeDefinitions:
- AttributeName: location
AttributeType: S
- AttributeName: lookup
AttributeType: S
KeySchema:
- AttributeName: location
KeyType: "HASH"
- AttributeName: lookup
KeyType: "RANGE"
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
We deployed this to the dev stage using the following CLI command:
serverless deploy
This created a stack in CloudFormation called lookups-dev, a DynamoDB table called lookup_dev, and Lambdas called lookups-dev-createOrUpdateLookups and lookups-dev-getLookup.
Now, when we try to deploy to a new stage called prod using this CLI command:
serverless deploy --stage prod
We get an error saying the table lookup_dev already exists in the stack, with the stack ID of the lookups-dev stack.
This is the full error:
An error occurred: lookupTable - lookup_dev already exists in stack arn:aws:cloudformation:us-east-1:aaaaabbbbbbbccccccdddddd:stack/lookups-dev/wwwwwww-wwwwwww-wwwwwwaws.
Question:
How do we deploy to a new stage when we have already deployed our app to the dev stage?
The issue is that the name of the table needs to change between stages. I see you use ${self:provider.stage} to try to do this, but all that does is use the value of stage under the provider section, and because you haven't set one, it always uses the default of dev. I would suggest adding the following line under provider so that you have something like this:
provider:
stage: ${opt:stage, 'dev'}
What this means is that if you pass the stage on the CLI using --stage, it will set provider.stage to that value, or to the default of dev otherwise.
I have an IAM role in my current CFN template, but I don't have permission to create IAM resources directly in this account, so I need to convert this to Service Catalog code in my template.
Here is the original code:
MongoDBRole:
Type: 'AWS::IAM::Role'
Properties:
ManagedPolicyArns:
- arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- 'ec2.amazonaws.com'
Action:
- 'sts:AssumeRole'
Tags:
- Key: name
Value: role-mongodb
- Key: env
Value: !Ref TagEnvironment
- Key: sme
Value: dba
And this is what I tried:
MongoDBRole:
Type: AWS::ServiceCatalog::CloudFormationProvisionedProduct
Properties:
ProductName: IAMRole
ProvisioningArtifactName: 1.0.9
ProvisioningParameters:
- Key: RoleNameSuffix
Value: MongoRole
- Key: AssumingServices
Value: ec2.amazonaws.com
- Key: ManagedPolicyArns
Value: arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
This is the error: AWS::ServiceCatalog::CloudFormationProvisionedProduct CREATE_FAILED Model validation failed (#/Tags/0/Value: failed validation constraint for keyword [pattern])
I am not confident that I created this the right way, and I am pretty new to CloudFormation and even more so to Service Catalog. How can I rectify this?
To use Service Catalog you need to:
create a portfolio (AWS::ServiceCatalog::Portfolio)
create a product (AWS::ServiceCatalog::CloudFormationProduct)
associate product with portfolio (AWS::ServiceCatalog::PortfolioProductAssociation)
provision the product (AWS::ServiceCatalog::CloudFormationProvisionedProduct)
In step 2, when you create a product, you need to pass the template that you want to deploy: in your case, the template for the IAM role.
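The four steps above, sketched as CFN resources (the resource names, the S3 template URL, and the product metadata are all placeholders, not values from the question):

```yaml
MyPortfolio:
  Type: AWS::ServiceCatalog::Portfolio
  Properties:
    DisplayName: my-portfolio
    ProviderName: my-team

IAMRoleProduct:
  Type: AWS::ServiceCatalog::CloudFormationProduct
  Properties:
    Name: IAMRole
    Owner: my-team
    ProvisioningArtifactParameters:
      - Info:
          # The template containing your AWS::IAM::Role, uploaded to S3
          LoadTemplateFromURL: https://s3.amazonaws.com/my-bucket/iam-role.yml

ProductAssociation:
  Type: AWS::ServiceCatalog::PortfolioProductAssociation
  Properties:
    PortfolioId: !Ref MyPortfolio
    ProductId: !Ref IAMRoleProduct
```

With these in place, the AWS::ServiceCatalog::CloudFormationProvisionedProduct from the question can reference the product by name.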
I am trying to create a simple CloudFormation Stack, but it does not work. Here is my CloudFormation template.
Resources:
MyDBSecrets:
Type: AWS::SecretsManager::Secret
Properties:
Description: 'This is password of mysql database'
GenerateSecretString:
PasswordLength: 16
ExcludePunctuation: true
Name: MyDBSecrets
MyDBInstance:
Type: AWS::RDS::DBInstance
Properties:
DBName: MyDBInstance
AllocatedStorage: '20'
DBInstanceClass: db.t3.micro
Engine: mysql
MasterUsername: 'testdb'
MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref MyDBSecrets, ':SecretString}}' ]]
SecretRDSInstanceAttachment:
Type: "AWS::SecretsManager::SecretTargetAttachment"
Properties:
SecretId: !Ref MyDBSecrets
TargetId: !Ref MyDBInstance
TargetType: AWS::RDS::DBInstance
On stack creation I can see my secret resource is created and my RDS instance is also created, but on SecretTargetAttachment I am getting CREATE_FAILED with a 'SecretString is not valid JSON' error. Am I missing something?
Secrets Manager typically stores secrets in one of two formats:
as part of a JSON object, e.g. { "user": "master", "password": "password123" }
or as a plain text secret, e.g. password123
The Secrets Manager documentation recommends the JSON version, and their samples all use it. In CloudFormation this can be generated using SecretStringTemplate, and a password field (for example) can be extracted using :SecretString:password after the secret ARN in a dynamic reference.
Furthermore, this format appears to be required by AWS::SecretsManager::SecretTargetAttachment, as it stores the RDS instance in another field of the JSON object. This is the cause of your error.
A word of warning if you are using this with ECS: you should not use a dynamic reference in the Task Definition, as it will be saved in plain text for anyone to read from the console/CLI. Instead you should use a Secrets section with a ValueFrom pointing at the Secrets Manager secret. This, unfortunately, does not currently appear to support extracting fields from the JSON blob, so you will have to parse the JSON blob within your Docker container.
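Parsing the blob in the container is only a couple of lines. A sketch assuming the secret was created with a SecretStringTemplate holding username/password keys (the field names and the DB_SECRET variable are assumptions, not from the question):

```python
import json

def db_credentials(raw_secret):
    # raw_secret is the whole SecretString, a JSON object such as
    # {"username": "master", "password": "..."} (field names assumed).
    blob = json.loads(raw_secret)
    return blob["username"], blob["password"]

# e.g. user, password = db_credentials(os.environ["DB_SECRET"])
# where DB_SECRET was populated via the task definition's Secrets section.
```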
Reference pattern for Secrets Manager secrets: the reference-key segment is composed of several segments, including the secret id, secret value key, version stage, and version id.
Use the following pattern: {{resolve:secretsmanager:secret-id:secret-string:json-key:version-stage:version-id}}
secret-id: The name or Amazon Resource Name (ARN) that serves as a unique identifier for the secret. To access a secret in your AWS account, you need only specify the secret name. To access a secret in a different AWS account, specify the complete ARN of the secret. Required.
secret-string: Currently, the only supported value is SecretString. The default is SecretString.
json-key: Specifies the key name of the key-value pair whose value you want to retrieve. If you do not specify a json-key, CloudFormation retrieves the entire secret text. This segment may not include the colon character (:).
version-stage: Specifies the secret version that you want to retrieve by the staging label attached to the version. Staging labels are used to keep track of different versions during the rotation process. If you use version-stage, then don't specify version-id. If you don't specify either a version stage or a version ID, then the default is to retrieve the version with the version stage value of AWSCURRENT. This segment may not include the colon character (:).
version-id: Specifies the unique identifier of the version of the secret that you want to use in stack operations. If you specify version-id, then don't specify version-stage. If you don't specify either a version stage or a version ID, then the default is to retrieve the version with the version stage value of AWSCURRENT. This segment may not include the colon character (:).
For more info go here.
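Tying this back to the question above: had MyDBSecrets been generated as JSON (via SecretStringTemplate) with a password key, a single field could be pulled directly, without the !Join, like this (the key name is an assumption):

```yaml
MasterUserPassword: '{{resolve:secretsmanager:MyDBSecrets:SecretString:password}}'
```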
I found the solution to my problem; it was not straightforward. I contacted the AWS support team and they helped me resolve it. They recommended using a macro for custom processing on the template.
Step 1: Macro.yml
AWSTemplateFormatVersion: 2010-09-09
Resources:
TransformExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: [lambda.amazonaws.com]
Action: ['sts:AssumeRole']
Path: /
Policies:
- PolicyName: root
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action: ['logs:*']
Resource: 'arn:aws:logs:*:*:*'
- Effect: Allow
Action: ['s3:*']
Resource: '*'
TransformFunction:
Type: AWS::Lambda::Function
Properties:
Code:
ZipFile: |
import traceback
def handler(event, context):
response = {
"requestId": event["requestId"],
"status": "success"
}
try:
paramPassword= event["params"]["paramPassword"]
Description= event["fragment"]["Description"]
Name= event["fragment"]["Name"]
print(event)
print("starting macro execution")
fragment = {}
fragment['Name'] = Name
fragment['Description'] = Description
if paramPassword == "":
fragment['GenerateSecretString'] = {}
fragment['GenerateSecretString']['PasswordLength'] = 16
fragment['GenerateSecretString']['ExcludePunctuation'] = 'true'
else:
fragment['SecretString'] = {}
fragment['SecretString']['Ref'] = "paramPassword"
print(fragment)
response["fragment"] = fragment
print(response)
except Exception as e:
    traceback.print_exc()
    response["status"] = "failure"
    response["errorMessage"] = str(e)
return response
Handler: index.handler
Runtime: python3.6
Role: !GetAtt TransformExecutionRole.Arn
TransformFunctionPermissions:
Type: AWS::Lambda::Permission
Properties:
Action: 'lambda:InvokeFunction'
FunctionName: !GetAtt TransformFunction.Arn
Principal: 'cloudformation.amazonaws.com'
Transform:
Type: AWS::CloudFormation::Macro
Properties:
Name: 'SecretManager'
Description: To check for secret's default value and conditionally create secret
FunctionName: !GetAtt TransformFunction.Arn
Step 2: template.yml
Parameters:
paramPassword:
Type: String
Default: test
Description: Enter the default value of SecretString
Resources:
LambdaIAMRole:
Type: 'AWS::IAM::Role'
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
Action:
- 'sts:AssumeRole'
Path: /
Policies:
- PolicyName: root
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- 's3:*'
Resource: '*'
- Effect: Allow
Action:
- 'logs:CreateLogGroup'
- 'logs:CreateLogStream'
- 'logs:PutLogEvents'
Resource: 'arn:aws:logs:*:*:*'
TestSecrets:
Type: AWS::SecretsManager::Secret
Properties:
'Fn::Transform':
- Name: SecretManager
Parameters:
paramPassword: !Ref paramPassword
Description: 'This is my password'
Name: 'my-secret-password2'
So I am creating a serverless application on AWS using the Serverless Framework.
For our stack, we create a number of Lambda functions, a DynamoDB table, and an API Gateway, and now we want to add a SimpleDB domain as well.
I cannot seem to find any information online on what code snippet to add to serverless.yaml to create a SimpleDB domain.
I wrote the following code, which creates the domain, but the name of the domain is not as expected:
resources:
Resources:
LogSimpleDBTable:
Type: "AWS::SDB::Domain"
Properties:
DomainName : ${self:provider.environment.SIMPLEDB}
Description: "SDB Domain to store data log"
And the variable SIMPLEDB is defined as
SIMPLEDB: git-pushr-processing-${opt:stage, self:provider.stage}
So when I deploy using the command
serverless deploy --stage staging --awsaccountid XXXXX
I expect the name of the SimpleDB table to be
git-pushr-processing-staging
instead I get a domain with the following name
git-pushr-api-staging-LogSimpleDBTable-1P7CQH4SGAWGI
Where the last bit of sequence (1P7CQH4SGAWGI) varies every time.
We are using the exact same pattern to name our DynamoDB tables, and they are created with the correct name:
DYNAMODB_TABLE: git-pushr-processing-${opt:stage, self:provider.stage}
resources:
Resources:
TodosDynamoDbTable:
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: id
AttributeType: S
KeySchema:
-
AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
TableName: ${self:provider.environment.DYNAMODB_TABLE}
StreamSpecification:
StreamViewType: NEW_AND_OLD_IMAGES
We get a DynamoDB table with the following name
git-pushr-processing-staging
So what am I doing wrong here?
I don't know how to make serverless use the domain name of your choice; AWS::SDB::Domain does not support a DomainName property, so CloudFormation always generates the physical name.
However, it is possible to reference the generated domain name using the Ref: LogSimpleDBTable syntax.
E.g. to pass the domain name to a Lambda (making it available as a process.env.SDB_DOMAIN_NAME variable):
functions:
queueRequests:
handler: src/consumer.handler
name: consumer
environment:
SDB_DOMAIN_NAME:
Ref: LogSimpleDBTable
Or reference it in IAM role statements:
provider:
...
iamRoleStatements:
- Effect: Allow
Action:
- sdb:GetAttributes
- sdb:PutAttributes
Resource:
Fn::Join:
- ""
- - "arn:aws:sdb:*:*:domain/"
- Ref: LogSimpleDBTable