AccessDenied: User is not authorized to perform: cloudfront:CreateInvalidation

I'm trying to deploy an ember app to AWS CloudFront using ember-cli-deploy and ember-cli-deploy-cloudfront.
I set up my bucket and user in AWS, gave my user AmazonS3FullAccess policy.
Set up my .env.deploy.production file to look like this:
AWS_KEY=<my key>
AWS_SECRET=<my secret>
PRODUCTION_BUCKET=app.<my domain>.com
PRODUCTION_REGION=us-east-1
PRODUCTION_DISTRIBUTION=<my cloudfront distribution id>
My config/default.js looks like this:
/* jshint node: true */
module.exports = function(deployTarget) {
  var ENV = {
    build: {},
    pipeline: {
      activateOnDeploy: true
    },
    s3: {
      accessKeyId: process.env.AWS_KEY,
      secretAccessKey: process.env.AWS_SECRET,
      filePattern: "*"
    },
    cloudfront: {
      accessKeyId: process.env.AWS_KEY,
      secretAccessKey: process.env.AWS_SECRET
    }
  };
  if (deployTarget === 'staging') {
    ENV.build.environment = 'production';
    ENV.s3.bucket = process.env.STAGING_BUCKET;
    ENV.s3.region = process.env.STAGING_REGION;
    ENV.cloudfront.distribution = process.env.STAGING_DISTRIBUTION;
  }
  if (deployTarget === 'production') {
    ENV.build.environment = 'production';
    ENV.s3.bucket = process.env.PRODUCTION_BUCKET;
    ENV.s3.region = process.env.PRODUCTION_REGION;
    ENV.cloudfront.distribution = process.env.PRODUCTION_DISTRIBUTION;
  }
  return ENV;
};
I installed ember-cli-deploy and ember-cli-deploy-cloudfront, and ran ember install ember-cli-deploy-aws-pack.
When I run ember deploy production, I get this error:
AccessDenied: User: arn:aws:iam::299188948670:user/Flybrary is not authorized to perform: cloudfront:CreateInvalidation
It's my understanding that ember-cli-deploy-cloudfront handles creating invalidations for you, but when I saw this error I went into the AWS CloudFront console and created an invalidation myself. I still get the same error when I run ember deploy production.

IAM policies do not allow restricting access to specific CloudFront distributions. The workaround is to use a wildcard for the resource instead of referencing a specific CloudFront resource. Adding that to your IAM policy will work around the issue you're having.
Here is an example of that in a working IAM policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateInvalidation",
        "cloudfront:GetInvalidation",
        "cloudfront:ListInvalidations"
      ],
      "Resource": "*"
    }
  ]
}
Docs:
AWS Services That Work with IAM
CloudFront API Permissions
Using Identity-Based Policies (IAM Policies) for CloudFront

Related

Backup vault creation failed because of insufficient privileges

I am trying to create a backup plan, rule and vault using AWS CDK. After deploying the application I receive the following error in cloudformation console.
Resource handler returned message: "Insufficient privileges to perform this action. (Service: Backup, Status Code: 403, Request ID: xxxxxxx)" (RequestToken: xxxxxxxxx, HandlerErrorCode: GeneralServiceException)
My CDK bootstrap role definitely has access to Backup. See the policy document below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "cdk",
      "Effect": "Allow",
      "Action": [
        "lambda:*",
        "logs:*",
        "serverlessrepo:*",
        "servicediscovery:*",
        "ssm:*",
        "cloudformation:*",
        "kms:*",
        "iam:*",
        "sns:*",
        "dynamodb:*",
        "codepipeline:*",
        "cloudwatch:*",
        "events:*",
        "acm:*",
        "sqs:*",
        "backup:*"
      ],
      "Resource": "*"
    }
  ]
}
Following are my CDK code snippets:
backup-rule
new backup.BackupPlanRule({
  ruleName: 'myTestRuleName',
  completionWindow: Duration.hours(1),
  startWindow: Duration.hours(1),
  scheduleExpression: events.Schedule.cron({
    day: '*',
    hour: '2'
  }),
  deleteAfter: Duration.days(90),
})
backup-vault
I tried without encryptionKey and also with a key that I created through the AWS Backup web interface. Neither worked.
new backup.BackupVault(this, `${id}-instance`, {
  backupVaultName: props.backupVaultName,
  // encryptionKey: this.key
})
backup-plan
new BackupPlan(scope, `${id}-instance`, {
  backupPlanName: context.backupPlanName,
  backupPlanRules: context.backupPlanRules,
  // backupVault: context.backupVault
});
backup selection
I also tried without creating the role, letting AWS CDK create and use the default role.
NOTE: I have also tried creating the plan, rule and vault without resource selection, to make sure the problem does not occur on the resource-selection side.
const role = new iam.Role(this, 'Backup Access Role', {
  assumedBy: new iam.ServicePrincipal('backup.amazonaws.com')
});
const managedPolicy = iam.ManagedPolicy.fromManagedPolicyArn(
  this,
  'AWSBackupServiceRolePolicyForBackup',
  'arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup'
);
role.addManagedPolicy(managedPolicy);
plan.backupPlan.addSelection('selection', {
  resources: [
    BackupResource.fromDynamoDbTable(MyTable)
  ],
  // role: role
});
I faced this problem too, and adding permissions for backup-storage solved it for me. I referenced the AWSBackupFullAccess managed policy for the action names.
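As a sketch, the extra statement to attach to the deploying principal could look like this (action names taken from the AWSBackupFullAccess managed policy; the wildcard resource is an assumption you may want to narrow):

```json
{
  "Effect": "Allow",
  "Action": [
    "backup:*",
    "backup-storage:MountCapsule"
  ],
  "Resource": "*"
}
```

The backup-storage:MountCapsule action is the one vault creation typically fails without, since it is not covered by backup:*.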

Unable to use terraform with AWS IAM role with MFA configuration

My organisation uses a gateway account for which I have AWS credentials.
We also have personal accounts; to access a personal account, users in the gateway account assume IAM roles (created in the personal account).
With this configuration I am trying to create a Terraform resource, but I keep getting this error -> Error: operation error STS: AssumeRole, https response error StatusCode: 403, RequestID: xxxxxxx, api error AccessDenied: User: arn:aws:iam::xxxxxx:user/xx-xxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxx2:role/xxxxxx
Here is the provider configuration I am trying:
provider "aws" {
alias = "mad"
profile = "personal account"
region = "ap-south-1"
assume_role {
role_arn = "arn:aws:iam::xxxxxxx:role/personal account"
}
}
Update: the role requires MFA too.
The personal account has a trust relationship which allows the gateway account IAM user to assume the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::gateway-account-id:user/user"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}
The user user/xx-xxxxxx which you use to run the TF script, and which is going to assume the role role/xxxxxx, must have the sts:AssumeRole permission.
You can grant it by adding the following inline policy to the user:
{
  "Effect": "Allow",
  "Action": [
    "sts:AssumeRole"
  ],
  "Resource": [
    "arn:aws:iam::xxxxxxx2:role/xxxxxx"
  ]
}
UPDATE
Also, for MFA you need to use the token option in your provider configuration, or use one of the workarounds described in the TF GitHub issue.
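One common workaround is to move the role and MFA serial into a shared AWS config profile and obtain the temporary credentials outside of Terraform. A sketch (all profile names and ARNs below are placeholders):

```
# ~/.aws/config
[profile gateway]
# long-lived gateway-account credentials live in ~/.aws/credentials

[profile personal]
role_arn       = arn:aws:iam::xxxxxxx2:role/xxxxxx
source_profile = gateway
mfa_serial     = arn:aws:iam::gateway-account-id:mfa/your-user
```

The AWS CLI honors mfa_serial and prompts for the code (e.g. `aws sts get-caller-identity --profile personal`), caching the temporary credentials; Terraform itself historically did not prompt for the token, which is what the linked GitHub issue and tools such as aws-vault work around.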

AWS IAM - S3: "Error putting S3 server side encryption configuration: AccessDenied" even when I am the Administrator

I am the admin of my AWS account, with the arn:aws:iam::aws:policy/AdministratorAccess policy assigned to me, which grants permissions for all actions on all resources.
I am terraforming an S3 bucket that looks like this:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my_bucket"
acl = "log-delivery-write"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
versioning {
enabled = true
}
}
but when I apply the plan I get: Error: error putting S3 server side encryption configuration: AccessDenied: Access Denied
That is a weird error, considering I am the admin.
Getting the same error in the console:
You don't have permissions to edit default encryption
After you or your AWS administrator have updated your permissions to allow the s3:PutEncryptionConfiguration action, choose Save changes.
That is not true. The arn:aws:iam::aws:policy/AdministratorAccess policy has the following JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
Any ideas what is going on?
P.S.: I could successfully run the same HCL in another playground account with the same access. It seems I cannot in the account I actually want to deploy to, which makes zero sense.

AWS assumed-role unable to perform secretsmanager:GetSecretValue in serverless project

I have a serverless project written with node.js.
This service defines an IAM role for use at runtime with the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    // statement to allow logging omitted
    // statement for VPC stuff omitted (CreateNetworkInterface, etc.)
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "arn:aws:secretsmanager:eu-west-1:*:secret:my_secret_name-*"
    }
  ]
}
I have a lambda that then tries to read that secret:
import SecretsManager from 'aws-sdk/clients/secretsmanager' // v2
...
export const handler: ValidatedEventAPIGatewayProxyEvent<typeof schema> = async (event, context: any) => {
  try {
    const sm = new SecretsManager({ region }) // region is defined as "eu-west-1"
    const secret = await sm.getSecretValue({
      SecretId: 'my_secret_name'
    }).promise() // SDK v2: .promise() is needed to get an awaitable
  } catch (e) {
    console.error(e)
  }
}
This errors with the following:
AccessDeniedException: User: arn:aws:sts::{accountId}:assumed-role/my-service-lambda-role/my-service-my-stage-my-function is not authorized to perform: secretsmanager:GetSecretValue
I'm not sure why the permissions would not allow me to retrieve the secret. I'm further not sure why this is using sts and an assumed role, rather than just using the serverless lambda role directly. Can someone explain this to me and how to fix it?
NOTE: using the policy simulator, I can confirm the role created with the above policy does have access to read the defined secret, so this must be to do with assumed roles?

Access AWS S3 unauthenticated with Cognito IdentityPoolId, Error: CredentialsError: Missing credentials in config

I'm trying to set up a connection to S3 from an Angular application.
I followed the steps in this guide:
Create an Identity Pool
Configure Role Permissions
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME"
      ]
    }
  ]
}
Configure CORS
Defining the Webpage
Add the SDK to the browser script
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.728.0.min.js"></script>
Configuring the SDK
const AWS = (window as any).AWS;
AWS.config.region = 'eu-west-1'; // Region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'eu-west-1:bla-bla-bla',
});
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  params: { Bucket: 'my-bucket' }
});
When I run this:
s3.listObjects({ Delimiter: '/' }, (err, data) => {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data.Contents); // listObjects returns Contents, not Buckets
  }
});
I can see in the AWS console that a connection is made, but I still get the error: Error CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
message: "Invalid identity pool configuration. Check assigned IAM roles for this pool." __type: "InvalidIdentityPoolConfigurationException"
What am I missing here? I thought the IdentityPoolId is all that is needed.