I created a private REST API in API Gateway (with Lambda proxy integration), which needs to be accessible from a VPC. I've set up a VPC endpoint for API Gateway in the VPC. The API is accessible from within the VPC, as expected.
The VPC endpoint (and indeed the entire VPC environment) is created via CloudFormation.
The API needs to consume an Authorization header, which is not something I can change. The content of that header is specific to our company; it's not a standard format. The problem is that when I add an Authorization header to the request, API Gateway rejects it with the following error (from the API Gateway logs in CloudWatch):
IncompleteSignatureException
Authorization header requires 'Credential' parameter.
Authorization header requires 'Signature' parameter.
Authorization header requires 'SignedHeaders' parameter.
Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header.
Authorization=[the header content here]
If I remove the Authorization header, the request is accepted and I get the expected response from my lambda. The method I'm calling has Auth set to NONE.
The strange thing is that if I delete the VPC endpoint and create it manually via the console, it works correctly - the Authorization header is passed through to my lambda, instead of API Gateway inspecting and rejecting it.
I've torn the endpoint down and recreated it multiple times, both manually and with CloudFormation, and the results are consistent. I've compared the two versions to each other and they look exactly the same: same settings, same subnets, same security groups, same policy. Since I can see no difference between them, I'm at a bit of a loss as to why it doesn't work with the CloudFormation version.
The only difference I've been able to find is in the AWS headers for each version (with the Authorization header removed, otherwise it doesn't get as far as logging the headers with the CF endpoint). With the CF endpoint, the headers include x-amzn-vpce-config=0 and x-amzn-vpce-policy-url=MQ==. With the manual endpoint I get x-amzn-vpce-config=1, and the policy-url header isn't included.
I've also tried changing the API to both set and remove the VPC endpoint (it can be set on the API in the Settings section) and redeploying it, but in either case it has no effect: requests continue to work/get rejected as before.
Does anyone have any ideas? I've posted this on the AWS forum as well, but just in case anyone here has come across this before...
If it's of any interest, the endpoint is created like so ([] = redacted):
ApiGatewayVPCEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    PrivateDnsEnabled: true
    PolicyDocument:
      Statement:
        - Action: '*'
          Effect: Allow
          Resource: '*'
          Principal: '*'
    ServiceName: !Sub com.amazonaws.${AWS::Region}.execute-api
    SecurityGroupIds:
      - !Ref [my sec group]
    SubnetIds:
      - !Ref [subnet a]
      - !Ref [subnet b]
      - !Ref [subnet c]
    VpcEndpointType: Interface
    VpcId: !Ref [my vpc]
I've managed to get it working, and it's the most ridiculous thing.
This is the endpoint policy in CF (including property name to show it in context):
PolicyDocument:
  Statement:
    - Action: '*'
      Effect: Allow
      Resource: '*'
      Principal: '*'
This is how that policy appears in the console:
{
    "Statement": [
        {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
        }
    ]
}
This is how the policy appears in describe-vpc-endpoints:
"PolicyDocument": "{\"Statement\":[{\"Action\":\"*\",\"Resource\":\"*\",\"Effect\":\"Allow\",\"Principal\":\"*\"}]}"
Now let's look at the policy of a manually created endpoint.
Console:
{
    "Statement": [
        {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
        }
    ]
}
describe-vpc-endpoints:
"PolicyDocument": "{\n \"Statement\": [\n {\n \"Action\": \"*\", \n \"Effect\": \"Allow\", \n \"Principal\": \"*\", \n \"Resource\": \"*\"\n }\n ]\n}"
The console shows them exactly the same, and the JSON returned by describe-vpc-endpoints is the same except for some "prettifying" newlines and whitespace. Surely that could have no effect whatsoever? Wrong! It's those newlines that make the policy actually work!
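(If you want to reproduce the comparison, the stored policy documents can be pulled straight from the CLI; the endpoint IDs below are placeholders:)
# Print only the stored policy documents for the two endpoints
aws ec2 describe-vpc-endpoints \
    --vpc-endpoint-ids vpce-aaaaaaaa vpce-bbbbbbbb \
    --query 'VpcEndpoints[].PolicyDocument' \
    --output text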
Anyway, the solution is to supply the policy as JSON, for example:
ApiGatewayVPCEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    PrivateDnsEnabled: true
    PolicyDocument: '
      {
        "Statement": [
          {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
          }
        ]
      }'
    ServiceName: !Sub com.amazonaws.${AWS::Region}.execute-api
    SecurityGroupIds:
      - !Ref [my sec group]
    SubnetIds:
      - !Ref [subnet a]
      - !Ref [subnet b]
      - !Ref [subnet c]
    VpcEndpointType: Interface
    VpcId: !Ref [my vpc]
You can even put all the JSON on a single line; AWS will insert the newline characters at some point. The problem only occurs when the policy is written as YAML, which gets transformed to JSON without newlines.
With the CF resource like that, API Gateway accepts my Authorization header and passes it through to the Lambda without any issues.
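An alternative I haven't tested (so treat it as an assumption, not part of the verified fix) would be a YAML literal block scalar, which should also preserve the newlines in the resulting string:
PolicyDocument: |
  {
    "Statement": [
      {
        "Action": "*",
        "Effect": "Allow",
        "Resource": "*",
        "Principal": "*"
      }
    ]
  }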
In my serverless.yml file, I want to be able to add iamRoleStatements from two different files (this cannot change). So I tried doing it like this:
provider:
  iamRoleStatements:
    - ${file(__environments.yml):dev.iamRoleStatements, ''}
    - ${file(custom.yml):provider.iamRoleStatements, ''}
Each of these files has an iamRoleStatements section.
__environments.yml:
dev:
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'execute-api:Invoke'
      Resource: '*'
custom.yml:
provider:
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - lambda:InvokeFunction
      Resource:
        - "*"
Individually, each of them works great. But when I try to run sls deploy with both of them, I encounter the following error:
iamRoleStatements should be an array of objects, where each object has Effect, Action / NotAction, Resource / NotResource fields. Specifically, statement 0 is missing the following properties: Effect, Action / NotAction, Resource / NotResource; statement 1 is missing the following properties: Effect, Action / NotAction, Resource / NotResource
I searched online, and this approach appears to work for other sections of the serverless file, such as resources:
# This works perfectly well.
resources:
  - ${file(custom.yml):resources, ''}
  - ${file(__environments.yml):resources, ''}
So I wonder if there is any solution to this or if it is something that is not currently supported by the Serverless Framework.
Thanks for your help.
You're going to have to jump through a few hoops to get there.
File Merge Limitations
The serverless framework allows file imports anywhere in the configuration but only merges resources and functions sections.
Your example:
provider:
  iamRoleStatements:
    - ${file(__environments.yml):dev.iamRoleStatements, ''}
    - ${file(custom.yml):provider.iamRoleStatements, ''}
Results in an array of arrays like this:
{
  "provider": {
    "iamRoleStatements": [
      [
        {
          "Effect": "Allow",
          "Action": "execute-api:Invoke",
          "Resource": "*"
        }
      ],
      [
        {
          "Effect": "Allow",
          "Action": [
            "lambda:InvokeFunction"
          ],
          "Resource": [
            "*"
          ]
        }
      ]
    ]
  }
}
You might be able to submit a very small pull request to rectify this.
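In the meantime, one possible workaround (untested, and the iamRoleStatement keys below are my own invention) is to have each file expose a single statement object rather than an array, so the list in serverless.yml itself becomes the flat array the validator expects:
# Hypothetical: each file defines one statement object, not a list
provider:
  iamRoleStatements:
    - ${file(__environments.yml):dev.iamRoleStatement}
    - ${file(custom.yml):provider.iamRoleStatement}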
IAM Managed Policies using References
It might be possible to define each set of statements as a managed policy resource, and use the iamManagedPolicies provider config to point to each of those resources. Something like:
provider:
  name: aws
  iamManagedPolicies:
    - Ref: DevIamRole
    - Ref: CustomIamRole

resources:
  - ${file(__environments.yml):resources, ''}
  - ${file(custom.yml):resources, ''}
Of course you'd need to change the structure of those two files to be AWS::IAM::ManagedPolicy resources (Ref on a managed policy returns its ARN, which is what iamManagedPolicies expects).
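As a rough, untested sketch of what one of those files might become (keeping the logical name CustomIamRole so the Ref above still resolves):
# custom.yml, restructured as a managed policy resource
resources:
  Resources:
    CustomIamRole:
      Type: AWS::IAM::ManagedPolicy
      Properties:
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - lambda:InvokeFunction
              Resource: '*'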
Custom IAM Role
The framework also gives you the option to take complete control, which is fully documented.
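For completeness, that option looks roughly like this (a sketch based on the framework's documented custom-role support; the role name and trust policy here are placeholders):
provider:
  name: aws
  # Point all functions at a role you define yourself under resources
  role: MyCustomLambdaRole

resources:
  Resources:
    MyCustomLambdaRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service: lambda.amazonaws.com
              Action: sts:AssumeRole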
I hope this helps.
I would like to create an S3 bucket that is configured to work as a website, and I would like to restrict access to the S3 website to requests coming from inside a particular VPC only.
I am using Cloudformation to set up the bucket and the bucket policy.
The bucket CloudFormation has the WebsiteConfiguration enabled and AccessControl set to PublicRead:
ContentStorageBucket:
  Type: AWS::S3::Bucket
  Properties:
    AccessControl: PublicRead
    BucketName: "bucket-name"
    WebsiteConfiguration:
      IndexDocument: index.html
      ErrorDocument: error.html
The bucket policy includes two conditions: one grants full access to the bucket from the office IP, and the other grants access through a VPC endpoint. The code is as follows:
ContentStorageBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref ContentStorageBucket
    PolicyDocument:
      Id: BucketPolicy
      Version: '2012-10-17'
      Statement:
        - Sid: FullAccessFromParticularIP
          Action:
            - s3:*
          Effect: "Allow"
          Resource:
            - !GetAtt [ ContentStorageBucket, Arn ]
            - Fn::Join:
                - '/'
                - - !GetAtt [ ContentStorageBucket, Arn ]
                  - '*'
          Principal: "*"
          Condition:
            IpAddress:
              aws:SourceIp: "x.x.x.x"
        - Sid: FullAccessFromInsideVpcEndpoint
          Action:
            - s3:*
          Effect: "Allow"
          Resource:
            - !GetAtt [ ContentStorageBucket, Arn ]
            - Fn::Join:
                - '/'
                - - !GetAtt [ ContentStorageBucket, Arn ]
                  - '*'
          Principal: "*"
          Condition:
            StringEquals:
              aws:sourceVpce: "vpce-xxxx"
To test the above policy conditions, I have done the following:
I've added a file called json.json to the S3 bucket;
I've created an EC2 instance and placed it inside the VPC referenced in the bucket policy;
I've made a curl request to the file endpoint http://bucket-name.s3-website-us-east-1.amazonaws.com/json.json from the whitelisted IP address, and the request succeeds;
I've made a curl request to the file endpoint from inside the EC2 instance (placed in the VPC), and the request fails with a 403 Access Denied.
Notes:
I have made sure that the EC2 instance is in the correct VPC.
The aws:sourceVpce condition is not using the VPC ID; it is using the endpoint ID of the corresponding VPC endpoint.
I have also tried aws:sourceVpc with the VPC ID instead of aws:sourceVpce with the endpoint ID, but this produced the same result as above.
Given this, I currently am not sure how to proceed in further debugging this. Do you have any suggestions about what might be the problem? Please let me know if the question is not clear or anything needs clarification. Thank you for your help!
In order for resources to use the VPC endpoint for S3, the VPC route table must send all traffic destined for S3 to the VPC endpoint. Rather than maintaining a list of all of the S3-specific CIDR blocks on your own, AWS allows you to use prefix lists, which are a first-class resource in AWS.
To find the prefix list for S3, run the following command (your output should match mine, since this should be the same region-wide across all accounts, but it's best to check). Use the region of your VPC.
aws ec2 describe-prefix-lists --region us-east-1
I get the following output:
{
    "PrefixLists": [
        {
            "Cidrs": [
                "54.231.0.0/17",
                "52.216.0.0/15"
            ],
            "PrefixListId": "pl-63a5400a",
            "PrefixListName": "com.amazonaws.us-east-1.s3"
        },
        {
            "Cidrs": [
                "52.94.0.0/22",
                "52.119.224.0/20"
            ],
            "PrefixListId": "pl-02cd2c6b",
            "PrefixListName": "com.amazonaws.us-east-1.dynamodb"
        }
    ]
}
For com.amazonaws.us-east-1.s3, the prefix list ID is pl-63a5400a, so you can then create a route in whichever route table serves the subnet in question. The Destination should be the prefix list (pl-63a5400a), and the target should be the VPC endpoint ID (vpce-XXXXXXXX), which you can find with aws ec2 describe-vpc-endpoints.
This is trivial from the console. I don't remember how to do this from the command line; I think you have to send a cli-input-json with something like the below, but I haven't tested it. This is left as an exercise for the reader.
{
    "DestinationPrefixListId": "pl-63a5400a",
    "GatewayId": "vpce-12345678",
    "RouteTableId": "rtb-90123456"
}
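Alternatively (untested here, but the command does exist), for a gateway endpoint you can simply associate the route table with the endpoint and let AWS install the prefix-list route for you; the IDs below are placeholders:
# Associate the route table with the S3 gateway endpoint; AWS then adds the
# pl-63a5400a route to that table automatically.
aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-12345678 \
    --add-route-table-ids rtb-90123456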
Access Denied for bucket: appdeploy-logbucket-1cca50r865s65.
Please check S3bucket permission (Service: AmazonElasticLoadBalancingV2; Status Code: 400; Error Code:
InvalidConfigurationRequest; Request ID: e5e2245f-2f9b-11e9-a3e9-2dcad78a31ec)
I want to store my ALB logs in an S3 bucket. I have added policies to the S3 bucket, but it says access denied. I have tried a lot of configurations, but it failed again and again and my stack rolled back. I used Troposphere to create the template.
This is the bucket policy I've tried, but it's not working:
BucketPolicy = t.add_resource(
    s3.BucketPolicy(
        "BucketPolicy",
        Bucket=Ref(LogBucket),
        PolicyDocument={
            "Id": "Policy1550067507528",
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "Stmt1550067500750",
                    "Action": [
                        "s3:PutObject",
                        "s3:PutBucketAcl",
                        "s3:PutBucketLogging",
                        "s3:PutBucketPolicy"
                    ],
                    "Effect": "Allow",
                    "Resource": Join("", [
                        "arn:aws:s3:::",
                        Ref(LogBucket),
                        "/AWSLogs/",
                        Ref("AWS::AccountId"),
                        "/*"
                    ]),
                    "Principal": {"AWS": "027434742980"},
                }
            ],
        },
    )
)
Any help?
troposphere/stacker maintainer here. We have a stacker blueprint (which is a wrapper around a troposphere template) that we use at work for our logging bucket:
from troposphere import Sub
from troposphere import s3

from stacker.blueprints.base import Blueprint

from awacs.aws import (
    Statement, Allow, Policy, AWSPrincipal
)
from awacs.s3 import PutObject


class LoggingBucket(Blueprint):
    VARIABLES = {
        "ExpirationInDays": {
            "type": int,
            "description": "Number of days to keep logs around for",
        },
        # See the table here for account ids.
        # https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy
        "AWSAccountId": {
            "type": str,
            "description": "The AWS account ID to allow access to putting "
                           "logs in this bucket.",
            "default": "797873946194"  # us-west-2
        },
    }

    def create_template(self):
        t = self.template
        variables = self.get_variables()

        bucket = t.add_resource(
            s3.Bucket(
                "Bucket",
                LifecycleConfiguration=s3.LifecycleConfiguration(
                    Rules=[
                        s3.LifecycleRule(
                            Status="Enabled",
                            ExpirationInDays=variables["ExpirationInDays"]
                        )
                    ]
                )
            )
        )

        # Give ELB access to PutObject in the bucket.
        t.add_resource(
            s3.BucketPolicy(
                "BucketPolicy",
                Bucket=bucket.Ref(),
                PolicyDocument=Policy(
                    Statement=[
                        Statement(
                            Effect=Allow,
                            Action=[PutObject],
                            Principal=AWSPrincipal(variables["AWSAccountId"]),
                            Resource=[Sub("arn:aws:s3:::${Bucket}/*")]
                        )
                    ]
                )
            )
        )

        self.add_output("BucketId", bucket.Ref())
        self.add_output("BucketArn", bucket.GetAtt("Arn"))
Hopefully that helps!
The principal is wrong in the CloudFormation template. You should use the proper principal AWS account ID for your region. Look up the proper value in this document:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html#access-logging-bucket-permissions
Also, you could narrow down your actions. If you just want to push ALB logs to S3, you only need:
Action: s3:PutObject
Here's a sample BucketPolicy CloudFormation that works (you can easily translate it into the troposphere PolicyDocument element):
Resources:
  # Create an S3 logs bucket
  ALBLogsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "my-logs-${AWS::AccountId}"
      AccessControl: LogDeliveryWrite
      LifecycleConfiguration:
        Rules:
          - Id: ExpireLogs
            ExpirationInDays: 365
            Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
    DeletionPolicy: Retain

  # Grant access for the load balancer to write the logs
  # For the magic number 127311923021, refer to https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html#access-logging-bucket-permissions
  ALBLoggingBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ALBLogsBucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              AWS: "127311923021" # Elastic Load Balancing Account ID for us-east-1
            Action: s3:PutObject
            Resource: !Sub "arn:aws:s3:::my-logs-${AWS::AccountId}/*"
I am trying to build a CloudFormation script that sets up a Cognito User Pool and configures it to use a custom email for sending users their validation code in the signup process (i.e. FROM: noreply@mydomain.com).
I am getting this error when executing my AWS CloudFormation script:
"ResourceStatusReason": "Cognito is not allowed to use your email identity (Service: AWSCognitoIdentityProvider; Status Code: 400; Error Code: InvalidEmailRoleAccessPolicyException;
I have attached a policy for Cognito to use my SES email identity, e.g. noreply@mydomain.com. I manually set up and validated this email identity in SES prior to running the CloudFormation script.
Here is my CloudFormation configuration for the policy to allow Cognito to send emails on my behalf (e.g. from noreply@mydomain.com):
CognitoSESPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    Description: "Allow Cognito to send email on behalf of the email identity (e.g. noreply@example.org)"
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Sid: "ucstmnt0001"
          Effect: "Allow"
          Action:
            - "ses:SendEmail"
            - "ses:SendRawEmail"
          Resource: !FindInMap [ environment, !Ref "Environment", emailARN ]

SESRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: uc-cognito-ses-role
    Description: "An IAM role to allow Cognito to send email on behalf of the email identity"
    ManagedPolicyArns:
      - Ref: CognitoSESPolicy
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - cognito-idp.amazonaws.com
  DependsOn: CognitoSESPolicy
I am not sure what I am doing wrong here...
Answering my own question for others' benefit: AWS SES has its own managed identity for emails, requiring a user to verify ownership of the email before it can be used by other AWS services. My solution was to manually set up the SES email identity using the AWS console, verify the email account, and then reference the ARN of that SES identity in my CloudFormation script. Maybe AWS will have a way in the future to create SES identities via CloudFormation, but at this time it seems a manual process is required for the initial setup.
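For reference, the manual verification step can also be done from the CLI (the address here is a placeholder; the resulting identity ARN is what the CloudFormation script then references):
# Kick off verification (SES sends a confirmation email to the address)
aws ses verify-email-identity --email-address noreply@mydomain.com

# Check the verification status; once it reads "Success", the identity ARN is
# arn:aws:ses:<region>:<account-id>:identity/noreply@mydomain.com
aws ses get-identity-verification-attributes --identities noreply@mydomain.com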
Recently ran into this issue and still could not find a way to do it via CloudFormation. I was able to use aws ses put-identity-policy instead:
ses_policy=$(cat << EOM
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "cognito-idp.amazonaws.com"
            },
            "Action": [
                "ses:SendEmail",
                "ses:SendRawEmail"
            ],
            "Resource": "${email_arn}"
        }
    ]
}
EOM
)

aws ses put-identity-policy \
    --identity "${email_arn}" \
    --policy-name "${policy_name}" \
    --policy "${ses_policy}"
Instead of cat you could use read, but my script was already using set -o errexit, and it wasn't worth changing just to be purist for no particular reason.
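For context, the snippet assumes shell variables along these lines (the values are hypothetical):
# Placeholders: substitute your own identity ARN and policy name
email_arn="arn:aws:ses:us-east-1:123456789012:identity/noreply@mydomain.com"
policy_name="cognito-ses-policy"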
Can I impose a restriction on an IAM policy so that it applies to a specific account only? I have searched for documentation and examples online but could not find anything on it.
Edited:
There are multiple accounts and similar policies to implement, each with a different restriction. To prevent any mix-ups while implementing the policies, I want to ensure that there is a restriction imposed on the policy that tells which AWS account the policy can live in.
CFT:
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Policy for XYZ'
Resources:
  XYZPolicy:
    Type: "AWS::IAM::ManagedPolicy"
    Properties:
      Description: "Restrictions apply only to Account XYZ"
      Path: "/"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action:
              - "cloudformation:*"
              - "cloudtrail:*"
              - "cloudwatch:*"
              - "ec2:*"
              - "sso:*"
              - "s3:*"
            Resource: "*"
      Roles:
        - Ref: "XYZRole"
      ManagedPolicyName: "XYZPolicy"
By default, a policy only applies to the account it belongs to (cross-account access has to be explicitly enabled), so what you're trying to do is basically the default behavior.
If you really want to do this anyway, you may be able to try something along the lines of:
"Condition": {
"ArnEquals": {
"iam:PolicyArn": [
"arn:aws:iam::AWS-ACCOUNT-ID:policy/XYZPolicy"
]
}
}
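Another option worth considering (my own suggestion, not something from the original post): the global condition key aws:PrincipalAccount holds the account ID of the requesting principal, so a condition like the following should make the statements apply only when the policy is used from the intended account (the account ID below is a placeholder):
"Condition": {
    "StringEquals": {
        "aws:PrincipalAccount": "111122223333"
    }
}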