How can I configure database proxy on lambda in serverless.yml? - amazon-web-services

I am using the Serverless Framework to provision infrastructure on AWS, and I need to add a database proxy to my Lambda, but I couldn't find how to configure that. I have read the docs at https://www.serverless.com/framework/docs/providers/aws/guide/functions/ but they don't mention anything related to database proxies.
The screenshot below shows the bottom of the Lambda page in the AWS console. How can I add the proxy via serverless.yml?

1. Go to your RDS server and click on the proxy.
2. Copy the Proxy ARN.
3. Edit your serverless.yml and add:
provider:
  name: aws
  iam:
    role:
      statements:
        - Effect: "Allow"
          Action:
            - "rds-db:connect"
          Resource: "arn:aws:rds-db:us-east-1:123123123:dbuser:blah-123abc123abc/*"
Note that "rds" in the ARN was changed to "rds-db" and the resource type "db-proxy" was changed to "dbuser" (the database user, e.g. admin, or * for any user, goes after the slash). Run sls deploy and check the Lambda. You should see the proxy in the Database proxy configuration section.
For example, the Proxy ARN I copied from RDS was
arn:aws:rds:us-east-1:123123123:db-proxy:blah-123abc123abc
and I edited it to be
arn:aws:rds-db:us-east-1:123123123:dbuser:blah-123abc123abc/*
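The rewrite can be sketched as a small helper (the function name is mine; per the AWS rds-db ARN format, the resource type becomes dbuser and the database user, or *, goes after the slash):

```python
def proxy_arn_to_rds_db_resource(proxy_arn, db_user="*"):
    """Turn arn:aws:rds:REGION:ACCT:db-proxy:ID into
    arn:aws:rds-db:REGION:ACCT:dbuser:ID/DB_USER."""
    parts = proxy_arn.split(":")
    parts[2] = "rds-db"   # service: rds -> rds-db
    parts[5] = "dbuser"   # resource type: db-proxy -> dbuser
    return ":".join(parts) + "/" + db_user

print(proxy_arn_to_rds_db_resource(
    "arn:aws:rds:us-east-1:123123123:db-proxy:blah-123abc123abc"))
# -> arn:aws:rds-db:us-east-1:123123123:dbuser:blah-123abc123abc/*
```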
Also, be sure your Lambda is in the same VPC as the RDS proxy or it will not be able to connect.
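In serverless.yml, the VPC placement can be declared under provider (or per function); a minimal sketch, with placeholder subnet and security group IDs:

```yaml
provider:
  vpc:
    securityGroupIds:
      - sg-0123456789abcdef0       # placeholder: a SG allowed to reach the proxy
    subnetIds:
      - subnet-0123456789abcdef0   # placeholder: subnets shared with the proxy
```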
I am using:
% sls --version
Framework Core: 2.50.0
Plugin: 5.4.3
SDK: 4.2.3
Components: 3.13.2

Related

CodePipeline Deploying SAM Template Error With Added Action

I have a SAM template with the resources to create a Lambda function and an API gateway. The template is saved along with the code for the Lambda function and the buildspec.yaml file. When I run the code through CodePipeline without the API gateway resources, the SAM template is transformed and then deployed successfully. When I include the resources necessary to create the API gateway, I am presented with the following error upon creation:
AccessDenied. User doesn't have permission to call apigateway:GetResources
When I look at the policy attached to the CloudFormation role, I have the following:
Actions:
  - apigateway:DELETE
  - apigateway:GetResources
  - apigateway:GetRestApis
  - apigateway:POST
Effect: Allow
Resource: !Sub "arn:${AWS::Partition}:apigateway:*::/*"
The action has apigateway:GetResources defined, yet it still fails. When I permit all API Gateway actions, the template is successfully deployed by CodePipeline and CloudFormation; that is, if I have the following statement:
Actions:
  - apigateway:*
Effect: Allow
Resource: !Sub "arn:${AWS::Partition}:apigateway:*::/*"
Question: Is it possible to have CodePipeline with CloudFormation create an API Gateway without providing the catch-all (*) API Gateway actions?
API Gateway IAM policies have no such actions as:
- apigateway:GetResources
- apigateway:GetRestApis
The API gateway permissions have the form of:
apigateway:HTTP_VERB
So you probably need GET:
Actions:
  - apigateway:DELETE
  - apigateway:GET
  - apigateway:POST
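Putting the answer together, the scoped-down statement would then read (a sketch, keeping the resource scoping from the question):

```yaml
Actions:
  - apigateway:DELETE
  - apigateway:GET
  - apigateway:POST
Effect: Allow
Resource: !Sub "arn:${AWS::Partition}:apigateway:*::/*"
```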

How can I add database proxy in lambda via cloudformation?

I am using CloudFormation to provision Lambda and RDS on AWS, but I don't know how to add a database proxy to the Lambda. The screenshot below is from the Lambda console:
Does CloudFormation support adding this? I can't see it in the Lambda or DB proxy templates.
The exact configuration I use in CloudFormation template is:
MyLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - rds-db:connect
            Resource:
              - <rds_proxy_arn>/*
where <rds_proxy_arn> is the ARN of the proxy, but with the service rds-db instead of rds and the resource type dbuser instead of db-proxy. For example, if your proxy's ARN is arn:aws:rds:us-east-1:123456789012:db-proxy:prx-0123456789abcdef01, the whole line should be arn:aws:rds-db:us-east-1:123456789012:dbuser:prx-0123456789abcdef01/*.
After deployment, a new link appears in the Database Proxies section of the console.
As per the CloudFormation/Lambda documentation there is no option to specify the DB Proxy for a Lambda.
I don't see an option to add an RDS proxy while creating a Lambda function in the low level HTTP API also. Not sure why.
As per the following GitHub issue, it seems this is not required to connect a Lambda to an RDS proxy: https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/750
You merely need to provide the new connection details to the Lambda (e.g. via environment variables).
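For example (a sketch; the variable name and endpoint value are assumptions), the proxy endpoint can be handed to the function as an environment variable in the SAM template:

```yaml
MyLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Environment:
      Variables:
        # hypothetical variable holding the RDS Proxy endpoint
        DB_HOST: my-proxy.proxy-abcdefg0123.us-east-1.rds.amazonaws.com
```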
After talking with AWS support: the screenshot in the AWS console for adding a proxy to a Lambda only grants the IAM permissions below to the Lambda, which means it is optional.
Allow: rds-db:connect
Allow: rds-db:*

Is there any way to tag all resources through serverless?

I am adding tags using Serverless, and my service is also using other resources, e.g. Kinesis. Is there any way to add tags to Kinesis through Serverless?
You can use a serverless plugin (serverless-plugin-resource-tagging). It will tag your Lambda functions, DynamoDB tables, buckets, streams, API Gateway, and CloudFront resources. The way it works is you provide stackTags containing your tags under the provider section of serverless.yml:
provider:
  stackTags:
    STACK: "${self:service}"
    PRODUCT: "Product Name"
    COPYRIGHT: "Copyright"
You can add tags to all resources that are created using serverless.
In serverless.yml, add the following under the provider section:
provider:
  name: aws
  runtime: {your runtime}
  region: {your region}
  stackTags: ${file(config/tags.yml):tags}
  tags: ${file(config/tags.yml):tags}
Note:
1. tags - adds tag information to all functions.
2. stackTags - adds tag information to all other resources generated by the CloudFormation template.
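The referenced config/tags.yml would then contain a top-level tags key (a sketch; the tag names are assumptions):

```yaml
tags:
  STACK: my-service
  PRODUCT: Product Name
  COPYRIGHT: Copyright
```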

How can I nest a serverless Step Function / State Machine / Lambda build into an existing AWS CloudFormation ElasticBeanstalk Application?

I have written a service using AWS Step Functions. I would like to integrate this into our application's existing Elastic Beanstalk development process, wherein we have distinct dev, staging, and production applications. Each of these stages has app-specific environment variables, which I would like to pull into my Lambda functions as well.
I am not presently using SAM but I can port over if necessary to accomplish this.
The following is a simplified configuration mirroring my serverless.yml file.
service:
  name: small-service

plugins:
  - serverless-webpack
  - serverless-step-functions
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-2
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: { "Fn::Join": ["", ["arn:aws:s3:::S3-bucket-name", "/*" ] ] }

functions:
  connect:
    handler: handler.connect

stepFunctions:
  stateMachines:
    smallService:
      name: small-service-${self:provider.stage}
      definition:
        Comment: Service that connects to things
        StartAt: Connect
        States:
          Connect:
            Type: Task
            Resource: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${self:provider.stage}-connect
            End: true
How can I dynamically deploy the step functions into different beanstalk applications? How can I access ElasticBeanstalk environment properties from within Step Functions?
Is there a better way to import .env environment variables dynamically into a Serverless application outside of EB? We are integrating the service into a larger AWS application's development workflow; is there a more "serverless" way of doing this?
Move your environment variables into SSM Parameter Store. Then you can either:
- reference SSM parameters in your serverless.yml, or
- fetch SSM parameters at the beginning of each Lambda invocation (see e.g. here)
Note that the former method requires re-deploying your Lambda to receive the latest SSM parameters whereas the latter always fetches the latest parameter values.
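A sketch of the first option (the parameter path is an assumption); the Serverless Framework resolves ${ssm:...} references at deploy time:

```yaml
provider:
  environment:
    # resolved from SSM Parameter Store when the stack is deployed
    DB_HOST: ${ssm:/small-service/dev/db-host}
```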

WARNING No validation for the AWS provider has been implemented

I believe I might be missing a piece here. I've added the AWS account:
hal config provider aws account add spinnakermaster \
  --account-id XXXXXXXXXXXX --assume-role role/spinnakerManaged
I've added the credentials for the AWS User.
hal config provider aws edit --access-key-id XXXXXXXXXXXXXXXXXXXX --secret-access-key
and was prompted for the corresponding secret access key.
I've edited the config file in the .hal directory:
aws:
  enabled: false
  accounts:
    - name: spinnakermaster
      requiredGroupMembership: []
      accountId: 'ZZZZZZZZZZZZZZZZZZ'
      regions: []
      assumeRole: role/spinnakerManaged
  primaryAccount: spinnakermaster
  accessKeyId: XXXXXXXXXXXXXXXXXXXX
  secretAccessKey: YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
  defaultKeyPairTemplate: '{{name}}-keypair'
  defaultRegions:
    - name: Canada
  defaults:
    iamRole: BaseIAMRole
And I am deploying Spinnaker with AWS support, which executes with one hiccup:
Problems in default.provider.aws.spinnakermaster:
- WARNING No validation for the AWS provider has been
implemented.
Which step/info/config am I missing?
Regards,
EN
Updated: this warning is OK and will not affect your executions.
My suggestions, after installing Spinnaker on local Debian on EC2, on Azure AKS, and as Minnaker on EC2:
Please don't install a microservice architecture in a monolithic environment such as local Debian. It doesn't work at all.
At all costs, focus on the correct AWS Managed and Managing IAM structure. Please follow Armory's Spinnaker instructions on how to achieve this Armory IAM structure.
Previous (misleading) answer: as of now (Spinnaker version 1.16.4), based on the official documentation, there are two ways to manage the AWS infrastructure:
- with an AWS key and secret
- with an IAM role attached to the AWS EC2 instance running Spinnaker
This error usually comes up when Halyard cannot recognize the key and secret for the corresponding account. Check the Halyard code documentation.
One way to resolve it, depending on your deployment type, is adding an AWS account with the corresponding key and secret values. Check Halyard add-account.
Documentation AWS Cloud Provider
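Separately, note that the pasted config shows enabled: false for the aws provider; a sketch of enabling it and re-adding the account (names taken from the question; note the flag is --assume-role):

```shell
# enable the AWS cloud provider in halyard
hal config provider aws enable

# re-add the managed account with the assume-role
hal config provider aws account add spinnakermaster \
  --account-id XXXXXXXXXXXX \
  --assume-role role/spinnakerManaged
```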