CodePipeline Deploying SAM Template Error With Added Action - amazon-web-services

I have a SAM template with the resources to create a Lambda function and an API Gateway. The template is saved along with the code for the Lambda function and the buildspec.yaml file. When I run the pipeline without the API Gateway resources, the SAM template is transformed and deployed successfully. When I include the resources necessary to create the API Gateway, I am presented with the following error:
AccessDenied. User doesn't have permission to call apigateway:GetResources
When I look at the policy attached to the CloudFormation role, I have the following:
Actions:
- apigateway:DELETE
- apigateway:GetResources
- apigateway:GetRestApis
- apigateway:POST
Effect: Allow
Resource: !Sub "arn:${AWS::Partition}:apigateway:*::/*"
The policy has apigateway:GetResources defined, yet it still fails. When I permit all API Gateway actions, the template is deployed successfully by CodePipeline and CloudFormation. That is, if I have the following statement:
Actions:
- apigateway:*
Effect: Allow
Resource: !Sub "arn:${AWS::Partition}:apigateway:*::/*"
Question: Is it possible to have CodePipeline with CloudFormation create an API Gateway without providing the catch-all (*) API Gateway actions?

There are no such actions as the following in API Gateway IAM policies:
- apigateway:GetResources
- apigateway:GetRestApis
API Gateway permissions have the form:
apigateway:HTTP_VERB
So you probably need GET:
Actions:
- apigateway:DELETE
- apigateway:GET
- apigateway:POST
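Combined with the Effect and Resource from the original statement, the full statement might look like the sketch below. PATCH and PUT are additions of mine and an assumption: stack updates to an existing API typically issue PATCH/PUT calls against the API Gateway control plane, so later deployments can fail without them.
Actions:
- apigateway:DELETE
- apigateway:GET
- apigateway:PATCH # assumption: used when CloudFormation updates an existing API
- apigateway:POST
- apigateway:PUT # assumption: used by some update/import operations
Effect: Allow
Resource: !Sub "arn:${AWS::Partition}:apigateway:*::/*"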

Related

Not authorized to perform: lambda:GetFunction

I am trying to deploy a Lambda function with the Serverless Framework. I've added my admin credentials to the AWS CLI, and I am getting this error message every time I try to deploy:
Warning: Not authorized to perform: lambda:GetFunction for at least one of the lambda functions. Deployment will not be skipped even if service files did not change.
Error:
CREATE_FAILED: HelloLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "null (Service: Lambda, Status Code: 403, Request ID: ********)" (RequestToken: ********, HandlerErrorCode: GeneralServiceException)
I've also removed everything from my project and from the YML file, and nothing worked:
service: test
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs12.x
  iam:
    role:
      statements:
        - Effect: "Allow"
          Action:
            - lambda:*
            - lambda:InvokeFunction
            - lambda:GetFunction
            - lambda:GetFunctionConfiguration
          Resource: "*"
functions:
  hello:
    handler: handler.hello
Deployments default to the us-east-1 region and use the default profile set on the machine where the serverless command is run. Perhaps you don't have permission to deploy in that region, or Serverless is using a different profile than intended. (For example, if I run serverless from an EC2 instance and log in separately, it would still use the default profile, i.e. the EC2 instance profile.)
Can you update your serverless.yml file to include the region as well?
provider:
  name: aws
  runtime: nodejs12.x
  region: <region_id>
  profile: <profile_name> # if not the default profile
When I tried to create a Lambda function manually from the AWS console, I found that I had no permission to view or create any Lambda function.
After that I found that my account was suspended due to behavior of mine that is not acceptable under AWS policy.
I followed the steps support sent me, and then my account was back and everything worked fine.

How can I specify API Gateway's role to give permission to invoke a Lambda?

I am using AWS API Gateway to trigger a Lambda function. I deployed them with the Serverless Framework; the configuration looks like:
handler:
  handler: src/index.handler
  name: handler
  tracing: true
  role: updateRole
  events:
    - http:
        path: /contact/{id}
        method: patch
        integration: lambda
        request:
          parameters:
            paths:
              id: true
After deploying, it works perfectly. But what I don't understand is where the IAM role/policy for this API integration is defined.
When I open the AWS console, it shows me the right configuration in the "Integration Request" tab.
But I can't find anywhere that specifies the IAM role for this integration. How can I find it or update it?
Permissions to execute a function from API Gateway are set using resource-based policies on the Lambda function, not an IAM role. In the Lambda console they are listed under Configuration -> Permissions -> Resource-based policy.
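For an http event, the Serverless Framework creates that grant for you as part of the stack; in CloudFormation terms it is roughly equivalent to an AWS::Lambda::Permission resource along the lines of the sketch below. The logical IDs and the REST API id are placeholders, not the exact names the framework generates.
HandlerLambdaPermissionApiGateway:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !GetAtt HandlerLambdaFunction.Arn # placeholder logical ID for the function above
    Action: lambda:InvokeFunction
    Principal: apigateway.amazonaws.com # API Gateway is the caller being granted invoke access
    SourceArn: arn:aws:execute-api:<region>:<account_id>:<rest_api_id>/*/*
You can inspect the actual statements on the deployed function under the Resource-based policy section mentioned above, or add your own statements by declaring additional AWS::Lambda::Permission resources in the resources section of serverless.yml.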

How can I provide a resource-based policy for my lambda via serverless.yml?

I am using serverless.yml to deploy lambdas to AWS and I'd like to know how to configure the resource-based policy for my lambda.
I deploy a customised alias for my Lambda and need to grant lambda:InvokeFunction in the resource-based policy, so that when you open Lambda -> Configuration -> Permissions, the policy appears as below.
When I use the role configured in serverless.yml, it only changes the permissions of my Lambda execution role. How can I modify the resource-based policy for my Lambda?
I have used the API Gateway resource policy before:
https://www.serverless.com/framework/docs/providers/aws/events/apigateway/#resource-policy
For associating the policy with the Lambda function directly, you can take a look at this thread (a sketch of that approach follows at the end of this answer):
https://github.com/serverless/serverless/issues/4926
An example serverless.yml for the API Gateway resource policy would look like this:
provider:
  name: aws
  runtime: nodejs8.10
  memorySize: 128
  stage: dev
  apiGateway:
    resourcePolicy:
      - Effect: Allow
        Principal: '*'
        Action: execute-api:Invoke
        Resource:
          - execute-api:/*/*/*
        Condition:
          IpAddress:
            aws:SourceIp:
              - 'your ip here'
How to restrict access to a lambda
Please note that resource policies currently only work for REST APIs in API Gateway: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html
HTTP APIs do not support resource policies.
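For the direct Lambda association asked about here (granting invoke on a customised alias), a minimal sketch using a plain AWS::Lambda::Permission resource in the resources section could look like the following. The logical ID HelloLambdaFunction, the alias name live, and the choice of principal are assumptions and must match what your stack actually creates.
resources:
  Resources:
    HelloAliasInvokePermission:
      Type: AWS::Lambda::Permission
      Properties:
        # Target the alias by appending its name to the function name
        FunctionName:
          Fn::Join:
            - ':'
            - - Ref: HelloLambdaFunction # assumed logical ID of the function
              - live # assumed alias name
        Action: lambda:InvokeFunction
        Principal: apigateway.amazonaws.com
This shows up under the function's resource-based policy rather than on its execution role, which is the distinction the answer above points out.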

How can I add a database proxy to a Lambda via CloudFormation?

I am using CloudFormation to provision a Lambda and RDS on AWS, but I don't know how to add a database proxy to the Lambda. The screenshot below is from the Lambda console.
Does CloudFormation support adding this? I can't see it in the Lambda or DB proxy templates.
The exact configuration I use in my CloudFormation template is:
MyLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - rds-db:connect
            Resource:
              - <rds_proxy_arn>/*
where <rds_proxy_arn> is the ARN of the proxy, but with the service rds-db instead of rds and the resource type dbuser instead of db-proxy. For example, if your proxy's ARN is arn:aws:rds:us-east-1:123456789012:db-proxy:prx-0123456789abcdef01, the whole line should be arn:aws:rds-db:us-east-1:123456789012:dbuser:prx-0123456789abcdef01/*.
After deployment, we can see a new link added under Database Proxies in the console.
As per the CloudFormation/Lambda documentation, there is no option to specify a DB proxy for a Lambda.
I don't see an option to add an RDS proxy while creating a Lambda function in the low-level HTTP API either. Not sure why.
As per the following GitHub issue, it seems this is not required to connect a Lambda to an RDS proxy: https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/750
You merely need to provide the new connection details to the Lambda (e.g. using environment variables) to make it work.
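For example, a minimal sketch of that approach in a SAM template, assuming the proxy is declared elsewhere in the same stack with the logical ID MyDBProxy (the environment variable names are placeholders):
MyLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Environment:
      Variables:
        # Point the function at the proxy endpoint instead of the DB instance
        DB_HOST: !GetAtt MyDBProxy.Endpoint
        DB_PORT: '5432'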
After talking with AWS support: the screen in the AWS console for adding a proxy to a Lambda only grants the IAM permissions below to the Lambda. That means it is optional.
Allow: rds-db:connect
Allow: rds-db:*

How to avoid giving `iam:CreateRole` permission when using existing S3 bucket to trigger Lambda function?

I am trying to deploy an AWS Lambda function that gets triggered when an AVRO file is written to an existing S3 bucket.
My serverless.yml configuration is as follows:
service: braze-lambdas
provider:
  name: aws
  runtime: python3.7
  region: us-west-1
  role: arn:aws:iam::<account_id>:role/<role_name>
  stage: dev
  deploymentBucket:
    name: serverless-framework-dev-us-west-1
    serverSideEncryption: AES256
functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: <company>-dev-ec2-us-west-2
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - prefix: gaurav/lambdas/123/
            - suffix: .avro
When I run serverless deploy, I get the following error:
ServerlessError: An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::<account_id>:assumed-role/serverless-framework-dev/jenkins_braze_lambdas_deploy is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH.
I see some mentions of Serverless needing iam:CreateRole because of how CloudFormation works, but can anyone confirm whether that is the only solution if I want to use existing: true? Is there another way around it, other than using the old Serverless plugin that was used before the framework added support for the existing: true configuration?
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that Serverless will try to create a new IAM role every time I try to deploy the Lambda function?
I've just encountered this and overcome it.
I also have a Lambda for which I want to attach an S3 event to an already existing bucket.
My place of work has recently tightened up AWS account security through the use of permissions boundaries.
So I encountered a very similar error during deployment:
Serverless Error ---------------------------------------
An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/xx-crossaccount-xx/aws-sdk-js-1600789080576 is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN.
If you read Using existing buckets on the Serverless site, it says:
NOTE: Using the existing config will add an additional Lambda function and IAM Role to your stack. The Lambda function backs-up the Custom S3 Resource which is used to support existing S3 buckets.
In my case I needed to further customise this extra role that Serverless creates so that it is also assigned the permissions boundary my employer requires on all roles. This happens in the resources: section.
If your employer is using permissions boundaries, you'll obviously need to know the correct ARN to use:
resources:
  Resources:
    IamRoleCustomResourcesLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        PermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
Some info on the serverless Resources config
Have a look at your own serverless.yml; you may already have a permissions boundary defined in the provider section. If so, you'll find it under rolePermissionsBoundary, which was added in (I think) version 1.64 of Serverless:
provider:
  rolePermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
If so, you should be able to use that ARN in the resources: sample I've posted here.
For testing purposes we can use:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action: "*"
      Resource: "*"
For running sls deploy, I would suggest you use a role/user/policy with Administrator privileges.
If you're restricted by your InfoSec team or the like, then I suggest you have your InfoSec team look at the docs for "AWS IAM Permission Requirements for Serverless Framework Deploy." Here's a good link discussing it: https://github.com/serverless/serverless/issues/1439. At the very least, they should add iam:CreateRole, and that can get you unblocked for today.
Now I will address your individual questions:
can anyone confirm if that is the only solution if I want to use existing: true
Apples and oranges. Your S3 configuration has nothing to do with your error message. iam:CreateRole must be added to the policy of whatever/whoever is doing sls deploy.
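A minimal sketch of such a statement, attached to the identity that runs sls deploy; the resource pattern is an assumption scoped to this service's role names, and the extra actions are assumptions too, since role creation alone is often not enough for the full stack lifecycle:
- Effect: Allow
  Action:
    - iam:CreateRole
    - iam:PutRolePolicy # assumption: lets the new role be given its inline policy
    - iam:DeleteRole # assumption: needed when the stack and its roles are removed
    - iam:DeleteRolePolicy # assumption: needed when the stack and its roles are removed
  Resource: arn:aws:iam::<account_id>:role/braze-lambdas-*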
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that serverless will try to create a new role every time I try to deploy the function?
Yes, it is a random identifier.
No, sls will not create a new role every time. This unique ID is cached and re-used for updates to an existing stack.
If a stack is destroyed and recreated, it will generate a new unique ID.