I am creating an S3 listener using the Serverless Framework. A user has requested that the S3 event trigger fire only for a specific file format.
I currently have
functions:
  LambdaTrigger:
    name: ${self:service}-${self:custom.environment}
    description: lambda_trigger
    handler: handler.lambda_handler
    tags:
      project: ""
      owner: ""
      environment: ${self:custom.environment}
    events:
      - existingS3:
          bucket: ${self:custom.listener_bucket_name}
          event: s3:ObjectCreated:*
          rules:
            - prefix: ${self:custom.listener_prefix}
            - suffix: ${self:custom.listener_suffix}
My user is requesting that the Lambda only be triggered when the file is of the form
/ID1_ID2_ID3.tar
I have handled the prefix and suffix conditions in the function above, but I am wondering how, or even if, it is possible to construct a rule that only fires when the file name has the format ID1_ID2_ID3, where each ID is N integers.
According to the docs, an earlier question and my own experience in the matter, the prefix and suffix parameters cannot be wildcards or regular expressions, so I'm afraid that unless you find some clever way to circumvent the restriction, it's not possible to do what you want.
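One common way around the restriction is to keep the coarse filtering in serverless.yml (the .tar suffix rule) and validate the exact pattern inside the handler itself, returning early for objects that don't match. A minimal Python sketch; the regex uses one-or-more digits per ID, so adjust \d+ to \d{N} if each ID must be exactly N digits:

```python
import re
from urllib.parse import unquote_plus

# Coarse filtering stays in serverless.yml (suffix: .tar); the exact
# ID1_ID2_ID3 shape is enforced here.
KEY_PATTERN = re.compile(r"^\d+_\d+_\d+\.tar$")

def is_expected_format(key: str) -> bool:
    """Return True when the object key's basename looks like ID1_ID2_ID3.tar."""
    basename = key.rsplit("/", 1)[-1]
    return bool(KEY_PATTERN.match(basename))

def lambda_handler(event, context):
    for record in event.get("Records", []):
        # Keys in S3 event notifications are URL-encoded
        key = unquote_plus(record["s3"]["object"]["key"])
        if not is_expected_format(key):
            continue  # ignore objects that slipped past the prefix/suffix rules
        # ... process the matching object here ...
```

The Lambda still gets invoked for every object that passes the prefix/suffix rules, so you pay for those invocations, but only matching files get processed.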
Fairly new to Serverless and having problems creating a dynamic path to an SSM parameter. I have tried a fair few ideas and am sure this is really close, but it's not quite there.
I'm trying to generate an ssm path as a custom variable that will then be used to populate a value for a Lambda function.
Here's the custom variable code
custom:
  securityGroupSsmPath:
    dev: "${self:service}/${self:custom.stage}/rds/lambdasecuritygroup"
    other: "${self:service}/${env:SHARED_INFRASTRUCTURE_ENV}/rds/lambdasecuritygroup"
  securityGroupId: ${ssm:, "${self:custom.securityGroupSsmPath.${env:SHARED_INFRASTRUCTURE_ENV}, self:custom.securityGroupSsmPath.other}"}
And here is where it is referenced in the function
functions:
  someLambda:
    handler: build/handlers/someLambda/handler.handler
    timeout: 60
    memorySize: 256
    vpc:
      securityGroupIds:
        - ${self:custom.securityGroupId}
And here is the error output; it seems like the SSM parameter is not being resolved:
Serverless Error ----------------------------------------
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "custom.securityGroupId": Parameter name: can't be prefixed with "ssm" (case-insensitive). If formed as a path, it can consist of sub-paths divided by slash symbol; each sub-path can be formed as a mix of letters, numbers and the following 3 symbols .-_
All help much appreciated,
Thanks!
Sam
In the end we tried numerous implementations, and the issue boiled down to trying to both retrieve the SSM value for securityGroupId and also parse/default the second variable within the same expression.
The solution was to remove the parsing/default variable from within the ssm step. Additionally, we had to remove some of the double quotes on the custom vars:
custom:
  securityGroupSsmPath:
    dev: ${self:service}/${self:custom.stage}/rds/lambdasecuritygroup
    other: ${self:service}/${env:SHARED_INFRASTRUCTURE_ENV}/rds/lambdasecuritygroup
  securityGroupId: ${self:custom.securityGroupSsmPath.${env:SHARED_INFRASTRUCTURE_ENV}, self:custom.securityGroupSsmPath.other}
functions:
  someLambda:
    handler: build/handlers/someLambda/handler.handler
    timeout: 60
    memorySize: 256
    vpc:
      securityGroupIds:
        - ${ssm:/${self:custom.securityGroupId}}
I have a DynamoDB table with a primary key of CompanyName and a sort key of CognitoUserID. I have a REST API and created an update method to update a user in my table. The issue I am having is that in my YAML template I need to provide both the primary and sort key as path parameters, but I am only able to provide one. The code below is my YAML template:
Type: AWS::Serverless::Function
Properties:
  CodeUri: cloudPortalFunctions/
  Handler: app.updateUserProfile
  Layers:
    - !Ref updateUserProfileDepLayer
  Runtime: nodejs14.x
  Architectures:
    - x86_64
  Events:
    updateUserProfile:
      Type: Api
      Properties:
        Path: /updateUserProfile/{cognitoUserID}
        Method: PUT
This is my update method in my YAML file.
I would like to be able to add CompanyName to the path, maybe to look like this:
Path: /updateUserProfile/{companyName}/{cognitoUserID}
I have tried the suggestions in "How can I use multiple path parameters from Serverless Framework", but none of them are accepted in my VS Code, and the AWS documentation does not help either.
Is there a way to do this?
I just needed to delete the stack and deploy it from scratch
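For what it's worth, API Gateway does accept multiple path parameters as long as each one sits in its own slash-separated segment, so once the stack deploys cleanly the event could be declared like this (a sketch, reusing the names from the question):

```yaml
Events:
  updateUserProfile:
    Type: Api
    Properties:
      # Each parameter gets its own path segment
      Path: /updateUserProfile/{companyName}/{cognitoUserID}
      Method: PUT
```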
There is a requirement to archive files inside a bucket folder (i.e. under a prefix) whose last-modified date exceeds a particular age (say 7 days), moving them to a subfolder with the date as the prefix:
Sample folder structure:
a.txt
b.txt
20210826/
  c.txt (last modified over 1 week ago)
20210819/
  d.txt (last modified over 2 weeks ago)
Any idea how this can be achieved? It seems there's no readily-available archiving policy to achieve this.
The only way I can think of is a Lambda function (with a scheduled trigger) that would:
Scan all the files' timestamps to see which are older than 1 week
Move the matched files under a date prefix (e.g. 20210826/c.txt)
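The scheduled-Lambda approach can be sketched in Python with boto3. The bucket name and the 7-day cutoff below are placeholders, and the helper that builds the date-prefixed key is separated out so it can be checked on its own:

```python
import datetime

def archive_key(key: str, last_modified: datetime.datetime) -> str:
    """Build the date-prefixed destination key, e.g. c.txt -> 20210826/c.txt."""
    return f"{last_modified:%Y%m%d}/{key.rsplit('/', 1)[-1]}"

def lambda_handler(event, context):
    import boto3  # imported here so the pure helper above is testable offline

    s3 = boto3.client("s3")
    bucket = "my-bucket"  # placeholder bucket name
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=7)

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            # Skip objects already under a date (or any other) prefix,
            # and anything newer than the cutoff
            if "/" in obj["Key"] or obj["LastModified"] >= cutoff:
                continue
            dest = archive_key(obj["Key"], obj["LastModified"])
            # S3 has no rename: copy, then delete the original
            s3.copy_object(Bucket=bucket, Key=dest,
                           CopySource={"Bucket": bucket, "Key": obj["Key"]})
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
```

Note that listing the whole bucket on every run is O(number of objects); for very large buckets an S3 Inventory report would scale better.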
Another question is about purging. If files are put under a date prefix, how can we configure the LifecycleConfiguration Rule in the CloudFormation template?
LifecycleConfiguration:
  Rules:
    - Id: DeletionRule
      Prefix: ''  # how to set this to cater for different dates as the key?
      Status: Enabled
      ExpirationInDays: !FindInMap [EnvironmentsMap, !Ref env, S3FileRetentionIndays]
You can configure a lifecycle rule that transitions the objects to a different storage class after x amount of time, and then capture that operation with EventBridge based on the CloudTrail API call: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-ct-api-tutorial.html
Finally, EventBridge triggers a Lambda function that moves the S3 object to a subfolder with the date prefix.
Regarding the expiration configuration, you'll need to create a parent folder for all the purged files and set that as the prefix.
Your CloudFormation template will look like this:
LifecycleConfiguration:
  Rules:
    - Id: DeletionRule
      Prefix: 'purged-files/'
      Status: Enabled
      ExpirationInDays: !FindInMap [EnvironmentsMap, !Ref env, S3FileRetentionIndays]
I'm fairly new to Lambda and to AWS in general. I'm trying to set up a simple REST API service with Lambda. I've used CloudFormation and CodePipeline to deploy a simple Express app.
I'm trying to figure out why, during the deployment phase (ExecuteChangeSet), I get this error:
Errors found during import: Unable to create resource at path '/stations/{stationId}/allowedUsers': A sibling ({id}) of this resource already has a variable path part -- only one is allowed Unable to create resource at path '/stations/{stationId}/allowedUsers/{userId}': A sibling ({id}) of this resource already has a variable path part -- only one is allowed
This is what I have inside the template.yml
Events:
  AllowedUsers:
    Type: Api
    Properties:
      Path: /stations/{stationId}/allowedUsers
      Method: get
  AddAllowedUsers:
    Type: Api
    Properties:
      Path: /stations/{stationId}/allowedUsers
      Method: post
  DeleteAllowedUsers:
    Type: Api
    Properties:
      Path: /stations/{stationId}/allowedUsers/{userId}
      Method: delete
  GetAllowedUser:
    Type: Api
    Properties:
      Path: /stations/{stationId}/allowedUsers/{userId}
      Method: get
I searched a bit for this error but I'm not sure how to solve it.
For me, the issue was different from what is described in the GitHub issue Bryan mentioned.
I was using two different parameter names. Finishing the refactoring and using a single name for the parameter fixed the issue.
Example of the conflicting configuration ({id} vs {userId} on sibling paths):
DeleteAllowedUsers:
  Type: Api
  Properties:
    Path: /stations/{stationId}/allowedUsers/{id}
    Method: delete
GetAllowedUser:
  Type: Api
  Properties:
    Path: /stations/{stationId}/allowedUsers/{userId}
    Method: get
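After the refactor, both sibling paths use the same parameter name (shown here with userId; any single shared name works):

```yaml
DeleteAllowedUsers:
  Type: Api
  Properties:
    Path: /stations/{stationId}/allowedUsers/{userId}
    Method: delete
GetAllowedUser:
  Type: Api
  Properties:
    Path: /stations/{stationId}/allowedUsers/{userId}
    Method: get
```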
Here is the workaround for this problem. It was posted on GitHub by pettyalex:
https://github.com/serverless/serverless/issues/3785
You might encounter this issue when updating a variable path while using Serverless (and serverless.yml) to provision API Gateway. The workaround:
Comment out the endpoint function and deploy, which removes it completely.
Then uncomment and deploy again.
I am using the Serverless Framework.
I have created a Cognito user pool in one stack and am importing it into a second stack.
When I assign the value of the Cognito user pool ID created in the first stack to an environment variable, it works.
But when I try to use it while creating an ARN for the Cognito authorizer, it doesn't work.
I get the error: Trying to populate non string value into a string for variable
Here is the snippet of my serverless.yml file.
service: myservice

plugins:
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: go1.x
  stage: dev
  region: us-east-1
  cognitoUserPoolId :
    Fn::ImportValue: cloudformation-resources-${self:provider.stage}-CognitoUserPool
  cognitoAppClientId :
    Fn::ImportValue: cloudformation-resources-${self:provider.stage}-CognitoUserPoolClient

custom:
  environment:
    COGNITO_USER_POOL_ID : ${self:provider.cognitoUserPoolId}
    COGNITO_APP_CLIENT_ID: ${self:provider.cognitoAppClientId}

functions:
  myfunction:
    handler: bin/handlers/myfunction
    package:
      exclude:
        - "**/**"
      include:
        - ./bin/handlers/myfunction
    events:
      - http:
          path: mypath
          method: put
          authorizer:
            name: cognitoapiauthorizer
            arn: arn:aws:cognito-idp:#{AWS::Region}:#{AWS::AccountId}:userpool/${self:provider.cognitoUserPoolId}
          cors: true
Are there any issues related to indentation, or to the way the value is being used inside another variable like the ARN?
I see you're trying to resolve the error Trying to populate non string value into a string for variable. In my experience, this means the variable is empty. What happens if you hardcode the cognitoUserPoolId at the end of the ARN? Is the error resolved? I suspect it would be. From there, take a closer look at how you declare that variable; your usage of Fn::ImportValue may not be working as intended.
Also, I would definitely run your YAML through a validator. There are too many extra blank lines, and extra spaces before some of the colons, e.g. COGNITO_USER_POOL_ID : ${self:provider.cognitoUserPoolId}. These may be causing problems. Keep your YAML formatting tidy.
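Building on that: the Fn::ImportValue under provider resolves to a CloudFormation object, not a string, so interpolating it into the ARN with ${self:provider.cognitoUserPoolId} cannot produce a plain string. One possible fix (a sketch; it assumes your Framework version accepts CloudFormation intrinsics for authorizer.arn) is to compose the ARN with Fn::Join so the imported value is never forced into a string on the Framework side:

```yaml
functions:
  myfunction:
    events:
      - http:
          path: mypath
          method: put
          authorizer:
            name: cognitoapiauthorizer
            arn:
              Fn::Join:
                - ""
                - - "arn:aws:cognito-idp:#{AWS::Region}:#{AWS::AccountId}:userpool/"
                  - Fn::ImportValue: cloudformation-resources-${self:provider.stage}-CognitoUserPool
          cors: true
```

This way CloudFormation, rather than the Serverless variable resolver, performs the concatenation at deploy time.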