CLIENT_ERROR: authorization failed for primary source and source version - amazon-web-services

I opened a free AWS account to learn and created an Administrator user group and user in IAM for myself.
I am following a tutorial "Automating your API testing with AWS CodeBuild, AWS CodePipeline, and Postman."
I am getting the error CLIENT_ERROR: authorization failed for primary source and source version in the DOWNLOAD_SOURCE phase of the Build transition in CodePipeline.
I followed the directions in an earlier post, "AWS CodeBuild failed CLIENT_ERROR: authorization failed for primary source and source version", with no success.
I added and attached a policy for connection permissions to my service role, as directed, like so:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "codestar-connections:UseConnection",
"Resource": "insert connection ARN here"
}
]
}
Later, I changed the Action above to
"codepipeline:GetPipelineState"
I added and attached a policy for GitPull like so:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"codecommit:GitPull"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
I have disconnected and reconnected my connection to GitHub and also tried creating a new personal access token with no success.
I have tried changing my S3 bucket to public and allowing access with:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::yourbucketname/*"
}
]
}
I also tried updating the Node.js version in the source code to 16.18.0.
I am stuck. The resources I have found keep pointing me to the same AWS page I mentioned. I don't know what else to do. I would appreciate any help.
My repo is located at https://github.com/venushofler/my-aws-codepipeline-codebuild-with-postman.git

The answer to the above was to add a default set of access permissions to my users, groups, and roles in my account. I found documentation at https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html which in part stated, "To add a default set of CodeBuild access permissions to an IAM group or IAM user, choose Policy Type, AWS Managed, and then do the following:
To add full access permissions to CodeBuild, select the box named AWSCodeBuildAdminAccess, choose Policy Actions, and then choose Attach."
This worked and allowed the Build and Deploy stages to succeed.
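If you prefer the command line, the same managed policy can be attached with the AWS CLI. This is only a sketch: the group name and user name below are placeholders for whichever IAM group or user you created for yourself.

# Attach the AWS-managed AWSCodeBuildAdminAccess policy to an IAM group
# (replace "Administrators" with your own group name).
aws iam attach-group-policy \
--group-name Administrators \
--policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess

# Or attach it directly to a single IAM user instead.
aws iam attach-user-policy \
--user-name my-admin-user \
--policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess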

Related

AWS IAM authorization: how to give access to services with the AWS console?

I have a "root" account.
I created an "admin" account which has all the right.
I created an account "dev" and I want it to only have acces to certain services:
s3
dynamoDB
cloudWatch
API Gateway
Lambda
Cognito
So I created a policy with the AWS console editor, gave full access to these resources, and allowed everything; it gave me this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:*",
"apigateway:*",
"lambda:*",
"dynamodb:*",
"cognito-idp:*"
],
"Resource": "*"
}
]
}
Looks good to me (not specific enough but good for a beginner).
Problem: I created a DB, a Lambda, an API Gateway, etc., but I can't see these resources. Which authorization should I give the "dev" role so it can see the items in the AWS console?
I found it: I only needed to switch my region in the top right corner of the console. (Shame on me.)

Databricks AWS account setup - AWS storage with error - Missing permissions: PUT, LIST, DELETE

I have created a PREMIUM trial Databricks account with AWS. I have set up the AWS account with user access keys.
For configuring AWS storage, I followed the instructions at the URL below (I set up the bucket policy as shown below).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Grant Databricks Access",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::98765432101:root"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-databricks-user-bucket/*",
"arn:aws:s3:::my-databricks-user-bucket"
]
}
]
}
https://docs.databricks.com/administration-guide/account-settings/aws-storage.html
But I am getting the error below.
The provided S3 bucket is valid, but have insufficient permissions to
launch a Databricks deployment. Please double check your settings
according to the tutorial. Missing permissions: PUT, LIST, DELETE
The bucket policy I used above already includes the PUT, LIST, and DELETE actions, yet I am still facing the error.
Note: by trial and error, I changed the Action as below, which allows all actions, but I am still getting the same error.
"Action": "*"
The above error was caused by a mistake I made when setting up the Databricks account with AWS.
As part of setting up the AWS account details in Databricks, a cross-account role should be created (the alternative is using an access key). When creating the role, an AWS account ID should be given: the Databricks AWS account ID, whose value is 414351767826.
The mistake I made was giving my own AWS account ID instead of the Databricks one. Following the URL below as-is will work as expected.
I made the same mistake when setting up AWS storage. Following the documentation as-is will work perfectly.
https://docs.databricks.com/administration-guide/account-settings/aws-accounts.html
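To make the fix concrete, here is a rough sketch of applying the corrected bucket policy from the CLI. The bucket name is taken from the policy in the question and the policy file name is a placeholder; the only substantive change from the policy above is the Principal, which must reference the Databricks AWS account rather than your own.

# The bucket policy is the same as above except for the Principal, which must
# reference the Databricks AWS account:
#   "Principal": { "AWS": "arn:aws:iam::414351767826:root" }
# Apply it to the bucket (the policy file name is a placeholder).
aws s3api put-bucket-policy \
--bucket my-databricks-user-bucket \
--policy file://databricks-bucket-policy.json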

AWS Backup: Missing permission iam:PassRole

I'm currently spinning in circles trying to restore from an AWS Backup and am running into permissions errors. I have administrator access to my AWS account. I've tried creating a new policy and attaching it to my user account in IAM.
The issue I can't seem to get around is that I need to add the permission iam:PassRole, but I can't seem to find it anywhere within the AWS portal. How can I add this permission to my policy?
EDIT: I've created a policy with all backup permissions allowed, including iam:PassRole, however I am still receiving the error message "You are not authorized to perform this operation." when trying to perform the backup. The policy I've created and attached to my user looks as follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"backup:*",
"iam:PassRole",
"iam:GetRole"
],
"Resource": "*"
}
]
}
“To successfully do a restore with the original instance profile, you will need to make changes to the restore policy. If you apply instance profile during the restore, you must update the operator role and add PassRole permissions of the underlying instance profile role to EC2. Otherwise, Amazon EC2 won’t be able to authorize the instance launch and it will fail.”
Here is the policy you can attach to the AWS default Backup role “AWSBackupDefaultServiceRole” to work around this issue:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::<Account-ID>:role/*"
}
]
}
Source: https://medium.com/contino-engineering/new-aws-backup-features-for-quick-and-easy-ec2-instance-recovery-c8887365ca6a
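If you want to apply that from the CLI instead of the console, a hedged sketch follows; the policy name and file name are placeholders, and the role name comes from the answer above.

# Attach the PassRole statement above as an inline policy on the default
# backup role; save the JSON above as pass-role-policy.json first.
aws iam put-role-policy \
--role-name AWSBackupDefaultServiceRole \
--policy-name backup-restore-pass-role \
--policy-document file://pass-role-policy.json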

AWS S3 upload fails only in the production environment, but succeeds in the local environment

I tried to upload an image using aws-sdk and multer-s3.
In my local environment the upload succeeded, but in the production environment (AWS Lambda) it fails with status 403 Forbidden.
My AWS credential key and secret key are the same as in the local environment, and I have verified the key in the production environment.
I can't see any difference between the two environments. What am I missing?
I have even tried setting the AWS keys in my router code like below, but it also failed.
AWS.config.accessKeyId = 'blabla';
AWS.config.secretAccessKey = 'blalbla';
AWS.config.region = 'ap-northeast-2';
and here is my policy
{
"Id": "Policy1536755128154",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1536755126539",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::elebooks-image/*",
"Principal": "*"
}
]
}
Update the S3 policy attached to your user according to the policy below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::YOUR-BUCKET",
"arn:aws:s3:::YOUR-BUCKET/*"
]
}
]
}
it's working on my server.
I haven't worked with AWS Lambda but I am familiar with S3. When you're using the AWS SDK in your local environment, you're probably using the root user with default full access, so it will just work.
With Lambda, however, according to the following extract from the documentation, you need to make sure that the IAM role you specified when you created the Lambda function has the appropriate permissions to do an s3:PutObject to that bucket.
Permissions for your Lambda function – Regardless of what invokes a Lambda function, AWS Lambda executes the function by assuming the IAM role (execution role) that you specify at the time you create the Lambda function. Using the permissions policy associated with this role, you grant your Lambda function the permissions that it needs. For example, if your Lambda function needs to read an object, you grant permissions for the relevant Amazon S3 actions in the permissions policy. For more information, see Manage Permissions: Using an IAM Role (Execution Role).
See Writing IAM policies: How to grant access to an S3 bucket
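A rough sketch of what granting that to the function's execution role could look like from the CLI. The role name is a placeholder, the bucket name is taken from the policy in the question, and the inline document only grants the object-level actions the upload needs.

# Grant the Lambda execution role the S3 object actions it needs on the bucket
# (the role name is a placeholder; the bucket is taken from the question).
aws iam put-role-policy \
--role-name my-lambda-execution-role \
--policy-name allow-s3-upload \
--policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
"Resource": "arn:aws:s3:::elebooks-image/*"
}
]
}'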

Unable to setup AWS Account on Spinnaker

I followed the following steps to add and configure an AWS account in Spinnaker:
hal config provider aws account add my-aws-acc --account-id xxxxxxxxxxxx --assume-role SpinnakerManaged
hal config provider aws enable
AWS Account Setup
The SpinnakerManaged role has the following policies attached:
pass_role_policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "*"
}
]
}
Power User Access
The server on which Spinnaker is hosted has the SpinnakerAuth role attached, which has the following policies:
PowerUser Access
Pass_role_policy
assume_role_policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "*"
}
]
}
command: hal deploy apply
Spinnaker gets deployed successfully, but the clouddriver service on port 7002 doesn't come up.
Error in the /var/log/spinnaker/cloudriver/clouddriver.log file: Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException: Not authorized to perform sts:AssumeRole (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied;
This is related to the trust relationship in the AWS IAM configuration. The setup of AWS IAM permissions for the cases described below has been improved in the spinnaker.io documentation:
Use a Managing AWS User with an AWS key and secret, with a policy that allows assuming the ManagedTargetRole.
Use a Managing Role with a policy that allows assuming the ManagedTargetRole.
Please refer to one of these options and deploy again; a sketch of the trust relationship the managed role needs follows below.
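For reference, a hedged sketch of what the trust relationship on the SpinnakerManaged role needs to look like so that the role attached to the Spinnaker server (SpinnakerAuth, from the question above) is allowed to assume it. The account ID is a placeholder.

# Set the trust policy on SpinnakerManaged so the SpinnakerAuth role may
# assume it; <Account-ID> is a placeholder for the managing account.
aws iam update-assume-role-policy \
--role-name SpinnakerManaged \
--policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": "arn:aws:iam::<Account-ID>:role/SpinnakerAuth" },
"Action": "sts:AssumeRole"
}
]
}'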
In my case, the local Debian installation of Spinnaker never worked for me. I was able to deploy Spinnaker successfully by using the Minnaker project for a PoC.