This stack was working at one point... I'm not sure what's going on. This permission is no longer doing what it did before, or has become invalid.
I have a Lambda function that rotates a Secret, so naturally it must be triggered by Secrets Manager. I built up the Permission as follows:
import * as aws from '@pulumi/aws'
export const accessTokenSecret = new aws.secretsmanager.Secret('accessTokenSecret', {});
export const smPermission = new aws.lambda.Permission(`${lambdaName}SecretsManagerPermission`, {
action: 'lambda:InvokeFunction',
function: rotateKnacklyAccessTokenLambda.name,
principal: 'secretsmanager.amazonaws.com',
sourceArn: accessTokenSecret.arn,
})
And the Policy,
{
Action: [
'secretsmanager:GetResourcePolicy',
'secretsmanager:GetSecretValue',
'secretsmanager:DescribeSecret',
'secretsmanager:ListSecrets',
'secretsmanager:RotateSecret',
],
Resource: 'arn:aws:secretsmanager:*:*:*',
Effect: 'Allow',
},
Running pulumi up -y yields
aws:secretsmanager:SecretRotation (knacklyAccessTokenRotation):
error: 1 error occurred:
* error enabling Secrets Manager Secret "" rotation: AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. Ensure that the function policy grants access to the principal secretsmanager.amazonaws.com.
This error confuses me, because the Policy created for the Lambda will not accept a Principal param (which makes sense; the same behaviour happens in the AWS Console), so I'm sure they mean the Permission rather than the Policy.
Based on the log I can tell that the Permission is created well after the Lambda and Secrets Manager resources are. I'm not sure if this is a Pulumi issue, similar to how it destroys stacks in the incorrect order (Roles and Policies, for example).
I can see the Permission in the AWS Lambda configuration section, so maybe it's ok?
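In case the ordering is the culprit, one thing I can try is forcing the dependency explicitly with Pulumi's dependsOn resource option, so rotation is only enabled once the Permission exists. A minimal sketch, assuming the rotation resource looks roughly like this (the rotation schedule is a placeholder):
import * as aws from '@pulumi/aws'
// Sketch: make the SecretRotation wait for the invoke Permission above.
export const rotation = new aws.secretsmanager.SecretRotation('knacklyAccessTokenRotation', {
  secretId: accessTokenSecret.id,
  rotationLambdaArn: rotateKnacklyAccessTokenLambda.arn,
  rotationRules: { automaticallyAfterDays: 30 }, // placeholder schedule
}, { dependsOn: [smPermission] }) // ensure the Permission is created first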
Related
I have the external-secrets operator v0.5.1 installed and working with a SecretStore for retrieving from AWS Parameter Store. I also tried updating to v0.5.8.
This is working fine with IRSA, but if I try to create an ExternalSecret for AWS Secrets Manager, with a new SecretStore, the SecretStore's status is Valid but the ExternalSecret that references this SecretStore gets the following error: SecretSyncedError
AccessDeniedException: User: arn:aws:sts::12345678:assumed-role/eks-backend-role-pre/external-secrets-provider-aws is not authorized to perform: secretsmanager:GetSecretValue on resource: /backend/pre/PRE_PRIVPGPKEY because no identity-based policy allows the secretsmanager:GetSecretValue action status code: 400,
Please note that STS is trying to use eks-backend-role-pre/external-secrets-provider-aws, which doesn't exist. The role that exists is eks-backend-role-pre. I'm not sure what is adding the suffix external-secrets-provider-aws that invalidates the role name.
Both SecretStores, the one dedicated to AWS Parameter Store and the one that gathers from AWS Secrets Manager, have the same service account associated.
Why does one ExternalSecret work while the other, using the same service account, doesn't?
There was a typo in the policy.
As the documentation shows, this is the correct format for the resource ARN:
arn:${Partition}:secretsmanager:${Region}:${Account}:secret:${SecretId}
I had incorrectly declared the resource:
"arn:aws:secretsmanager:eu-west-1:1234567890:secret/backend/pre/*"
Correct:
"arn:aws:secretsmanager:eu-west-1:1234567890:secret:/backend/pre/*"
In contrast to SSM Parameter Store, where you declare the resource as "arn:aws:ssm:eu-west-1:1234567890:parameter/backend/pre/*", with Secrets Manager you need to add a colon after the service: :secret:
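For illustration, the corrected resource ARNs side by side in a policy statement (a sketch, using the same placeholder account ID and paths as above):
// Sketch: Secrets Manager takes ':secret:' before the path; SSM takes 'parameter/'.
{
  Effect: 'Allow',
  Action: ['secretsmanager:GetSecretValue', 'ssm:GetParameter'],
  Resource: [
    'arn:aws:secretsmanager:eu-west-1:1234567890:secret:/backend/pre/*',
    'arn:aws:ssm:eu-west-1:1234567890:parameter/backend/pre/*',
  ],
},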
I have been trying to figure out how to create a highly restrictive IAM policy but seem to be running into issues.
We have a lambda that returns sensitive information. A few other lambdas in our system (currently only 3) will need the ability to invoke this handler directly with the lambda:InvokeFunction permission but we want to make it very explicit which functions have access.
Our goal is to have an explicit Deny IAM policy that whitelists the functions that should be granted access. This way, we can centrally manage the whitelist rather than relying on devs to create Allow policies for themselves.
I've tried the following; however, it seems to block access to everything, including the whitelisted ARN.
[
{
Effect: 'Allow',
Action: ['lambda:InvokeFunction'],
Resource: 'arn:aws:lambda:us-east-1:123456789:function:protected-function',
},
{
Effect: 'Deny',
Action: ['lambda:InvokeFunction'],
// the resource we are protecting
Resource: 'arn:aws:lambda:us-east-1:123456789:function:protected-function',
Condition: {
ArnNotLike: {
// the lambda that should have access to Invoke
'AWS:SourceArn': ['arn:aws:lambda:us-east-1:123456789:function:access-protected-data'],
},
},
}
]
What would be the best way to secure this function using IAM, so that we can have central management of permissions while still allowing our devs to deploy via a shared CI/CD IAM user that is responsible for provisioning the stack? Open to any ideas that help us secure the function, including protection against any possible internal bad actors/errors.
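One alternative I'm considering (a sketch only; the role name is hypothetical): since an SDK call from one Lambda to another is signed with the calling function's execution role, and AWS:SourceArn is generally only populated for requests made by AWS services, the caller could instead be matched with the global aws:PrincipalArn condition key:
// Sketch: deny InvokeFunction on the protected function to every principal
// except the whitelisted caller's execution role (hypothetical role name).
{
  Effect: 'Deny',
  Action: ['lambda:InvokeFunction'],
  Resource: 'arn:aws:lambda:us-east-1:123456789:function:protected-function',
  Condition: {
    ArnNotLike: {
      'aws:PrincipalArn': ['arn:aws:iam::123456789:role/access-protected-data-role'],
    },
  },
},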
I'm stuck on a missing permissions issue trying to create a Lambda function.
The execution role I've configured has the following permissions:
$ aws --output=text iam get-role-policy --policy-name=MyRolePolicy --role-name=my-role
<snip>
POLICYDOCUMENT 2012-10-17
STATEMENT Allow
ACTION s3:Get*
ACTION s3:List*
ACTION logs:CreateLogGroup
ACTION logs:CreateLogStream
ACTION logs:PutLogEvents
ACTION ec2:DescribeNetworkInterfaces
ACTION ec2:CreateNetworkInterface
ACTION ec2:DeleteNetworkInterface
And when I create a Lambda function with that role, creation succeeds:
$ aws lambda create-function \
--function-name=my-test --runtime=java8 \
--role='arn:aws:iam::1234567890:role/my-role' \
--handler=MyHandler \
--code=S3Bucket=my-bucket,S3Key=app.zip
<result successful>
However, when I create the function via Boto3 using the same arguments (esp. the same execution role), I get the following error:
Boto3 Usage
import boto3
from os import getenv

# assumes a Lambda client; function_name, bucket and artifact_name come from
# the surrounding deployment script
client = boto3.client('lambda')

client.create_function(
    FunctionName=function_name,
    Runtime='java8',
    Role=getenv('execution_role_arn'),
    Handler='MyHandler',
    Code={
        'S3Bucket': bucket,
        'S3Key': artifact_name
    },
    Publish=True,
    VpcConfig={
        'SubnetIds': getenv('vpc_subnet_ids').split(','),
        'SecurityGroupIds': getenv('vpc_security_group_ids').split(',')
    }
)
Boto3 Result
{
'Error':{
'Message':'The provided execution role does not have permissions to call CreateNetworkInterface on EC2',
'Code':'InvalidParameterValueException'
},
'ResponseMetadata':{
'RequestId':'47b6640a-f3fe-4550-8ac3-38cfb2842461',
'HTTPStatusCode':400,
'HTTPHeaders':{
'date':'Wed, 24 Jul 2019 10:55:44 GMT',
'content-type':'application/json',
'content-length':'119',
'connection':'keep-alive',
'x-amzn-requestid':'47b6640a-f3fe-4550-8ac3-38cfb2842461',
'x-amzn-errortype':'InvalidParameterValueException'
},
'RetryAttempts':0
}
}
Creating a function via the console with this execution role works as well, so I must be missing something in how I'm using Boto3, but I'm at a loss to explain.
Hopefully someone can catch a misapplication of Boto3 here, because I'm at a loss!
Your boto3 code is specifying a VPC:
VpcConfig={
'SubnetIds': getenv('vpc_subnet_ids').split(','),
'SecurityGroupIds': getenv('vpc_security_group_ids').split(',')
However, the CLI version is not specifying a VPC.
Therefore, the two requests are not identical. That's why one works and the other does not work.
From Configuring a Lambda Function to Access Resources in an Amazon VPC - AWS Lambda:
To connect to a VPC, your function's execution role must have the following permissions.
ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DeleteNetworkInterface
These permissions are included in the AWSLambdaVPCAccessExecutionRole managed policy.
The Lambda has a role that allows ec2:CreateNetworkInterface, but that does not cover the account executing the script.
The role currently assigned to the Lambda function allows the Lambda itself to create the network interfaces for its VpcConfig.
Check that the account running the script to provision the Lambda is allowed the ec2:CreateNetworkInterface action.
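A minimal sketch of that grant, written as a policy statement object (scope the Resource down as needed):
// Sketch: permissions for the identity that calls create-function with a VpcConfig.
{
  Effect: 'Allow',
  Action: [
    'ec2:CreateNetworkInterface',
    'ec2:DescribeNetworkInterfaces',
    'ec2:DeleteNetworkInterface',
  ],
  Resource: '*',
},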
I have recently started getting the following error on the release change action in the AWS CodePipeline console. I'm also attaching the screenshot.
Action execution failed
Insufficient permissions The provided role does not have permissions
to perform this action. Underlying error: Access Denied (Service:
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:
CA26EF93E3DAF8F0; S3 Extended Request ID:
mKkobqLGbj4uco8h9wDOBjPeWrRA2ybCrEsVoSq/MA4IFZqJb6QJSrlNrKk/EQK40TfLbTbqFuQ=)
I can't find any resources online anywhere for this error code.
Your pipeline is trying to access an S3 bucket, but the AWS CodePipeline service role does not have permission to access it. Create an IAM policy that provides access to S3 and attach it to the CodePipeline service role.
As @Jeevagan said, you must create a new IAM policy that grants access to the pipeline buckets.
Do not forget to add the following actions:
Action:
- "s3:GetObject"
- "s3:List*"
- "s3:GetObjectVersion"
I lost a few minutes because of this one in particular: GetObjectVersion.
By checking your CodeDeploy output, you'll be able to see that the process downloads a particular version of your artefact with the parameter "versionId".
Hope it will help.
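As a sketch, a statement carrying those actions could look like this (the bucket name is a placeholder):
// Sketch: S3 actions the CodePipeline service role needs on the artifact bucket.
{
  Effect: 'Allow',
  Action: ['s3:GetObject', 's3:GetObjectVersion', 's3:List*'],
  Resource: [
    'arn:aws:s3:::my-artifact-bucket',
    'arn:aws:s3:::my-artifact-bucket/*',
  ],
},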
You are missing the GetBucketVersioning action in your policy, so the correct example looks like below. I don't know why it's not mentioned anywhere in the reference/documentation:
- PolicyName: AccessRequiredByPipeline
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Action:
          - s3:PutObject
          - s3:GetObject
          - s3:GetObjectVersion
        Effect: Allow
        Resource: !Sub ${YouBucket.Arn}/*
      - Action:
          - s3:GetBucketVersioning
        Resource: !Sub ${YouBucket.Arn}
        Effect: Allow
      - Action:
          - kms:GenerateDataKey
          - kms:Decrypt
        Effect: Allow
        Resource: !GetAtt KMSKey.Arn
Another potential culprit that masquerades behind this error referencing S3 is missing KMS permissions on the IAM role for the CodePipeline. If you configured your CodePipeline to use KMS encryption, then the service role associated with the CodePipeline will also need KMS permissions for that KMS key in order to interact with the KMS-encrypted objects in S3. In my experience, missing KMS permissions cause the same error message referencing S3.
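As a sketch, the extra grant on the pipeline's service role could look like this (the key ARN is a placeholder):
// Sketch: KMS permissions for a CodePipeline service role whose artifact
// bucket is encrypted with a customer-managed key.
{
  Effect: 'Allow',
  Action: ['kms:Decrypt', 'kms:GenerateDataKey'],
  Resource: 'arn:aws:kms:us-east-1:111122223333:key/placeholder-key-id',
},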
I just ran into this issue, but the permissions were all set properly; I had used the same CloudFormation template with other projects, no problem. It turned out that the key name I was using in the S3 bucket was too long. Apparently it doesn't like anything longer than 20 characters. Once I changed the key name in my S3 bucket (and all of its associated references in the CloudFormation template files), everything worked properly.
I ran into the same issue when I used CloudFormation to build my CI/CD. My problem was that the CodePipeline ArtifactStore pointed to the wrong location in S3 ("codepipeline", a folder I was not allowed to access, in my case). Changing the ArtifactStore to an existing folder fixed my issue.
You can view pipeline details, like where the SourceArtifact points, by following this link.
I want to grant VPC access to my Lambda function. I use the following AWS CLI command:
aws lambda update-function-configuration \
--function-name SampleFunction \
--vpc-config SubnetIds=subnet-xxxx,SecurityGroupIds=sg-xxxx
But I receive the following error:
An error occurred (AccessDeniedException) when calling the
UpdateFunctionConfiguration operation: Your access has been denied by
EC2, please make sure your request credentials have permission to
DescribeSecurityGroups for sg-xxxx. EC2 Error Code:
UnauthorizedOperation. EC2 Error Message: You are not authorized to
perform this operation.
I have granted the following permissions to both my Lambda role and the user who executes the AWS command:
- "ec2:CreateNetworkInterface"
- "ec2:DescribeNetworkInterfaces"
- "ec2:DeleteNetworkInterface"
- "ec2:DescribeSecurityGroups"
I further tried granting full access to both the Lambda role and the user, but still received the same error.
Can anyone suggest what else I can try?
The trick is to ensure the pipeline / worker role / user that is deploying the Lambda function has access to the network-related policies. The Lambda function itself should suffice with the managed policy AWSLambdaVPCAccessExecutionRole:
arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
Action:
  - ec2:DescribeSecurityGroups
  - ec2:DescribeSubnets
  - ec2:DescribeVpcs
Effect: Allow
Resource: '*'
Your user's IAM policy needs further permissions, for example ec2:CreateSecurityGroup, etc. Have a look at this documentation to add the required permissions.
I experienced the same issue. Despite the IAM policy for the user having the required permissions, I could not use the AWS CLI to create a Lambda function with a VPC config (aws lambda create-function) or modify an existing function to add a VPC config (aws lambda update-function-configuration).
The only way I could get this to work was to create the Lambda function without a VPC config. I then modified the function to add the VPC config information (VPC, subnet and security groups) via the AWS console (in Lambda > Functions > My Function > Network). I was only able to use the console to do this, introducing a manual step into an otherwise fully automated process.
To answer some of the questions above about which user needs ec2:DescribeSecurityGroups and the related permissions: it is the user running the CLI command or logged in to the console. The function does not need a policy providing these permissions. The only special permissions needed by a function with a VPC config are:
ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DeleteNetworkInterface
These allow the function to create ENIs within your VPC using the subnet and security group you provide as described here.
Both the Lambda function's role and the user role (either the CloudFormation role or the command-line user) must have:
- ec2:CreateNetworkInterface
- ec2:DescribeNetworkInterfaces
- ec2:DeleteNetworkInterface
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
or ec2:* if that is acceptable for your use case's security.
I had the same issue deploying a Lambda with a VPC config using SAM/CloudFormation, and resolved it by adding the permissions above.
On a GitHub issue, some people say it is because of CloudFormation's resource-creation order. It is not (or maybe not anymore): I tested adding 20 dummy resources and still hit the same issue, which was resolved only by adding the permissions above.
Cheers,