Access Denied when attempting to Create Project in AWS CodeBuild

According to the AWS CodeBuild documentation, the Create Project operation requires only the codebuild:CreateProject and iam:PassRole Actions to be granted. I have done this in my role's policy, and set the Resource to "*", but when I click on the Create Project button, I immediately get Access Denied with no further information. I have no problems doing the analogous operation in CodeArtifact, CodePipeline, and CodeCommit. If I set "s3:*", I do not get the error, so evidently it's an S3 permission I'm missing, but which one?
What I am trying to do is create a role with reduced permissions so that a user can run a build and manage CodeSuite resources (add and edit repositories, pipelines, etc.) without using Administrator privileges.
Here is my policy JSON:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*Object",
                "s3:*ObjectVersion",
                "s3:*BucketAcl",
                "s3:*BucketLocation",
                "iam:*",
                "codepipeline:*",
                "codeartifact:*",
                "codecommit:*",
                "codebuild:*"
            ],
            "Resource": "*"
        }
    ]
}
(I am aware this configuration is inadvisable; I am trying to isolate the issue and provide a minimal reproducible example.)

With a little bit of educated trial and error, I narrowed it down to a List* Action, which is sufficiently specific for my purposes. I'm guessing it's ListObjects and ListObjectVersions, but I'm too lazy to confirm it.
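For anyone narrowing this down further, here is a sketch of what the reduced statement might look like. It assumes the operations above map to the IAM actions s3:ListBucket and s3:ListBucketVersions (the actions behind the ListObjects and ListObjectVersions APIs); I have not confirmed that this exact pair is sufficient:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codebuild:CreateProject",
                "iam:PassRole",
                "s3:ListBucket",
                "s3:ListBucketVersions"
            ],
            "Resource": "*"
        }
    ]
}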

Related

CLIENT_ERROR: authorization failed for primary source and source version

I opened a free AWS account to learn and created an Administrator user group and user in IAM for myself.
I am following a tutorial "Automating your API testing with AWS CodeBuild, AWS CodePipeline, and Postman."
I am getting the error CLIENT_ERROR: authorization failed for primary source and source version in the DOWNLOAD_SOURCE phase of the Build transition in CodePipeline.
I followed the directions in an earlier post, "AWS CodeBuild failed CLIENT_ERROR: authorization failed for primary source and source version", with no success.
I added and attached a policy for connection-permissions to my service role as directed, like so:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "codestar-connections:UseConnection",
            "Resource": "insert connection ARN here"
        }
    ]
}
Later, I changed the Action above to
"codepipeline:GetPipelineState"
I added and attached a policy for GitPull like so:
{
    "Action": [
        "codecommit:GitPull"
    ],
    "Resource": "*",
    "Effect": "Allow"
},
I have disconnected and reconnected my connection to GitHub and also tried creating a new personal access token with no success.
I have tried changing my S3 bucket to public with the following Allow bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::yourbucketname/*"
        }
    ]
}
I also tried updating the Node.js version in my source code to 16.18.0.
I am stuck. The resources I have found keep pointing me to the same AWS page I mentioned. I don't know what else to do. I would appreciate any help.
My repo is located at https://github.com/venushofler/my-aws-codepipeline-codebuild-with-postman.git
The answer to the above was to add a default set of access permissions to my users, groups, and roles in my account. I found documentation at https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html which in part stated, "To add a default set of CodeBuild access permissions to an IAM group or IAM user, choose Policy Type, AWS Managed, and then do the following:
To add full access permissions to CodeBuild, select the box named AWSCodeBuildAdminAccess, choose Policy Actions, and then choose Attach."
This worked to allow the Build and Deploy stage to succeed.

AWS Interactive Video Service - ivs.AccessDeniedException

I am following the AWS tutorial on how to set up the new video streaming product IVS: https://docs.aws.amazon.com/ivs/latest/userguide/GSIVS.html
I set up an IAM user with the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ivs:CreateChannel"
            ],
            "Resource": "*"
        }
    ]
}
But when I try to create a channel while logged in as the above-mentioned IAM user, I get the error
ivs.AccessDeniedException:
User: arn:aws:iam::532654645459:user/alex-iam is not authorized to perform:
ivs:CreateChannel on resource: *
Am I missing something? Here are screenshots of the policy setup.
(OP here) The solution that worked for me was to change the policy to grant all permissions to IVS for IAM user as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ivs:*"
            ],
            "Resource": "*"
        }
    ]
}
Everything worked fine afterward (creating channels, listing channels, viewing channel details).
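If ivs:* is broader than you want, something narrower along these lines might also work; this is only a sketch, assuming the console needs list and get permissions alongside CreateChannel (I have not confirmed the exact set of actions it calls):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ivs:CreateChannel",
                "ivs:GetChannel",
                "ivs:ListChannels",
                "ivs:ListStreamKeys"
            ],
            "Resource": "*"
        }
    ]
}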
To deal with this issue, it is best to reach out to the AWS Support Center via "Account and billing support". For the case details, select "Account" for the Type and "Other Account Issues" for the Category. For the subject and description, provide as many details about the error as possible, such as the error code above.
What could also help (especially on a fresh AWS account) is to spin up / launch an EC2 instance (Micro or whatever) and spin it back down. Try using IVS after that and see if that helped.
What type of account are you using (free tier, Educate account)?
In an Educate account, IAM users do not have access to some services. This might be one of the issues.
I solved the problem by adding a policy to the Lambda function's role.
Go to the AWS IAM console and navigate to Roles.
Find the role for your Lambda function and click the Add permissions button.
Then create an inline policy.
There, you can create and attach a policy to the role, as written above.
After that, your functions should work.

Could not create role AWSCodePipelineServiceRole

I'm trying to auto-deploy my static website's GitHub changes to my S3 bucket, and when I went to create the pipeline, it threw a "Could not create role AWSCodePipelineServiceRole" error.
My GitHub permissions are set up correctly. The repo name, bucket name, and object key are correct.
Has anyone ever encountered this?
I resolved this issue by:
Step 1: adding the deployment user I was logged on as into a Deployers group, to which I granted the IAMFullAccess policy.
Step 2: creating the pipeline by following the same steps as indicated in the AWS tutorial, which then succeeded.
Step 3: once created, reverse-engineering the group and the single policy attached to it that the wizard had created. It showed a really long policy that you can't really invent. The IAM section being:
"Statement": [
{
"Action": [
"iam:PassRole"
],
"Resource": "*",
I am just concerned that the Deployers group I created now has IAMFullAccess...
Also, I found that if you are logged in as an admin and add privileges to an IAM user, that user may not immediately enjoy the new privileges. I decided to log out and log back in to commit them. Maybe there is a lighter way, but I couldn't find it.
The reason behind the issue is that your IAM user (the user you are logged in as) is not permitted to create a role with the service role name 'AWSCodePipelineServiceRole'.
In order to give the IAM user permission to create roles with names matching 'AWSCodePipeline*' (e.g. 'AWSCodePipelineServiceRole-us-east-1-test'), attach the below policy to your IAM user:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "iam:CreateRole",
            "Resource": "arn:aws:iam::*:role/AWSCodePipeline*"
        }
    ]
}
Try a couple of things:
Try to create the IAM role with a different name (e.g. AWSCodePipelineServiceRole2020).
Give the pipeline a different name and keep the role name as it is (auto-generated by the pipeline).
I hope this will help.
I had to add these four IAM actions to get the CodePipeline creation issue fixed:
"iam:CreateRole",
"iam:CreatePolicy",
"iam:AttachRolePolicy",
"iam:PassRole"

AWS permissions for Fargate and SSM

I'm trying to create some infrastructure for a service I am building on AWS using AWS Fargate. I'm using SSM as a value store for some of my application configuration, so I need both the regular permissions for Fargate as well as additional permissions for SSM. However, after banging my head against this particular wall for a while, I've come to the conclusion that I just don't understand AWS IAM in general or this problem in particular, so I'm here for help.
The basis of my IAM code comes from this tutorial; the IAM code is actually not in that tutorial but rather in this file in the github repo linked to that tutorial. I presume I need to retain that STS permission for something although I'm not entirely sure what.
I've converted the IAM code from the tutorial into a JSON document because I find JSON easier to work with than the Terraform native thing. Here's what I've come up with. It doesn't work. I would like to know why it doesn't work and how to fix it. Please ELI5 (explain like I'm 5 years old) because I know nothing about this.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameters",
                "secretsmanager:GetSecretValue",
                "kms:Decrypt",
                "sts:AssumeRole"
            ],
            "Principal": {
                "Service": ["ecs-tasks.amazonaws.com"]
            }
        }
    ]
}
At a minimum, your ECS task needs the following:
The ability to assume a role
Resource-level permissions
In the example you referred to, an IAM role is created with the following:
A trust relationship is attached. <-- to enable the ECS task to assume an IAM role (see the sketch below)
The AWS managed policy AmazonECSTaskExecutionRolePolicy is attached. <-- resource permissions
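This is also why the policy in the question does not work as written: it mixes the trust policy (the part that carries the Principal and sts:AssumeRole) into what should be a plain permissions policy. A minimal sketch of the trust relationship on the task role, assuming the standard ecs-tasks service principal:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}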
So, in order to retrieve the SSM parameter values, add the below resource permissions.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:Describe*",
                "ssm:Get*",
                "ssm:List*"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:parameter/{your-path-hierarchy-to-parameter}/*"
            ]
        }
    ]
}
If your secrets use KMS, also grant the necessary KMS permissions (kms:Decrypt). Refer to specifying-sensitive-data for details.
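A sketch of such a KMS statement, assuming the parameters are encrypted with a customer managed key (the key ARN below is a placeholder):
{
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": "arn:aws:kms:<region>:<account-id>:key/<key-id>"
}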

Elastic Beanstalk deployment stuck on updating config settings

I've been testing my continuous deployment setup, trying to get to a minimal set of IAM permissions that will allow my CI IAM group to deploy to my "staging" Elastic Beanstalk environment.
On my latest test, my deployment got stuck. The last event in the console is:
Updating environment staging's configuration settings.
Luckily, the deployment will time out after 30 minutes, so the environment can be deployed to again.
It seems to be a permissions issue, because if I grant s3:* on all resources, the deployment works. It seems that when calling UpdateEnvironment, Elastic Beanstalk does something to S3, but I can't figure out what.
I have tried the following policy to give EB full access to its resource bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP",
                "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP/*",
                "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID",
                "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID/*"
            ]
        }
    ]
}
Where REGION, ACCOUNT, APP, and ENV_ID are my AWS region, account number, application name, and environment ID, respectively.
Does anyone have a clue which S3 action and resource EB is trying to access?
Shared this on your blog already, but this might have a broader audience so here it goes:
Following up on this, the Elastic Beanstalk team has provided me with the following answer regarding the S3 permissions:
"[...]Seeing the requirement below, would a slightly locked down version work? I've attached a policy to this case which will grant s3:GetObject on buckets starting with elasticbeanstalk. This is essentially to allow access to all elasticbeanstalk buckets, including the ones that we own. The only thing you'll need to do with our bucket is a GetObject, so this should be enough to do everything you need."
So it seems that Elastic Beanstalk needs access to buckets outside of your own account (buckets that AWS owns) in order to work properly (which is kind of bad, but that's just the way it is).
Coming from this, the following policy will be sufficient for getting things to work with S3:
{
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
    ],
    "Effect": "Allow"
},
{
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::elasticbeanstalk*",
    "Effect": "Allow"
}
Obviously, you need to wrap this into a proper policy document that IAM understands. All your previous assumptions about IAM policies have proven right, though, so I'm guessing this shouldn't be an issue.
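For completeness, a sketch of those two statements wrapped into a full policy document (placeholders kept as-is):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
                "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
                "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::elasticbeanstalk*"
        }
    ]
}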