I followed the following steps to add and configure an AWS account in Spinnaker:
hal config provider aws account add my-aws-acc --account-id xxxxxxxxxxxx --assume-role SpinnakerManaged
hal config provider aws enable
AWS Account Setup
The SpinnakerManaged role has the following policies attached:
pass_role_policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "*"
}
]
}
PowerUserAccess
The server on which Spinnaker is hosted has the SpinnakerAuth role attached, which has the following policies:
PowerUserAccess
Pass_role_policy
assume_role_policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "*"
}
]
}
command: hal deploy apply
Spinnaker deploys successfully, but the clouddriver service on port 7002 does not come up.
Error in the /var/log/spinnaker/clouddriver/clouddriver.log file: Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException: Not authorized to perform sts:AssumeRole (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied;
This is related to the trust relationship in the AWS IAM configuration. The spinnaker.io documentation has since improved its guidance on setting up AWS IAM permissions for the cases below:
Use a managing AWS user with an AWS key and secret, with a policy that allows it to assume the ManagedTargetRole
Use a managing role with a policy that allows it to assume the ManagedTargetRole
Please refer to one of these options and deploy again; a sketch of the required trust relationship is shown below.
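For reference, a minimal sketch of the trust relationship on the SpinnakerManaged role, assuming the managing role is the SpinnakerAuth role from the question and using a placeholder account ID, would be:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<Managing-Account-ID>:role/SpinnakerAuth"
},
"Action": "sts:AssumeRole"
}
]
}
Without a trust policy like this on the managed role, the sts:AssumeRole call from clouddriver is rejected with the AccessDenied error shown above.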
In my case, the local Debian installation of Spinnaker never worked. I was able to deploy Spinnaker successfully using the Minnaker project for a PoC.
I opened a free AWS account to learn and created an Administrator user group and user in IAM for myself.
I am following a tutorial "Automating your API testing with AWS CodeBuild, AWS CodePipeline, and Postman."
I am getting the error CLIENT_ERROR: authorization failed for primary source and source version in the DOWNLOAD_SOURCE phase of the Build transition in CodePipeline.
I followed the directions in an earlier post at AWS CodeBuild failed CLIENT_ERROR: authorization failed for primary source and source version with no success.
I added and attached a policy for connection permissions to my service role, as directed, like so:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "codestar-connections:UseConnection",
"Resource": "insert connection ARN here"
}
]
}
Later, I changed the Action above to
"codepipeline:GetPipelineState"
I added and attached a policy for GitPull like so:
{
"Action": [
"codecommit:GitPull"
],
"Resource": "*",
"Effect": "Allow"
},
I have disconnected and reconnected my connection to GitHub and also tried creating a new personal access token with no success.
I have also tried making my S3 bucket public and allowing access with:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::yourbucketname/*"
}
]
}
I also tried updating the Node.js version in the source code to 16.18.0.
I am stuck. The resources I have found keep pointing me to the same AWS page I mentioned. I don't know what else to do. I would appreciate any help.
My repo is located at https://github.com/venushofler/my-aws-codepipeline-codebuild-with-postman.git
The answer to the above was to add a default set of access permissions to my users, groups, and roles in my account. I found documentation at https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html which in part stated, "To add a default set of CodeBuild access permissions to an IAM group or IAM user, choose Policy Type, AWS Managed, and then do the following:
To add full access permissions to CodeBuild, select the box named AWSCodeBuildAdminAccess, choose Policy Actions, and then choose Attach."
This worked and allowed the Build and Deploy stages to succeed.
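If you prefer the CLI, the same managed policy can be attached with a command along these lines (the group name Administrators is only an assumption; use the group or user you actually created):
aws iam attach-group-policy --group-name Administrators --policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess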
I'm currently spinning in circles trying to restore from an AWS Backup and am running into permissions errors. I have administrator access to my AWS account. I've tried creating a new policy and attaching it to my user account in IAM as follows:
The issue I can't seem to get around is that I need to add the permission iam:PassRole, but I can't seem to find it anywhere within the AWS console. How can I add this permission to my policy?
EDIT: I've created a policy that allows all backup permissions, including iam:PassRole, but I am still receiving the error "You are not authorized to perform this operation." when attempting the restore. The policy I've created and attached to my user looks as follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"backup:*",
"iam:PassRole",
"iam:GetRole"
],
"Resource": "*"
}
]
}
“To successfully do a restore with the original instance profile, you will need to make changes to the restore policy. If you apply instance profile during the restore, you must update the operator role and add PassRole permissions of the underlying instance profile role to EC2. Otherwise, Amazon EC2 won’t be able to authorize the instance launch and it will fail.”
Here is the policy you can attach to the AWS default Backup role “AWSBackupDefaultServiceRole” to work around this issue:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::<Account-ID>:role/*"
}
]
}
Source: https://medium.com/contino-engineering/new-aws-backup-features-for-quick-and-easy-ec2-instance-recovery-c8887365ca6a
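If you prefer to attach this from the CLI instead of the console, something like the following should work, assuming the policy above is saved locally as passrole.json (the inline policy name AllowPassRole is arbitrary):
aws iam put-role-policy --role-name AWSBackupDefaultServiceRole --policy-name AllowPassRole --policy-document file://passrole.json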
I have 2 AWS accounts:
- account A, which has an ECR repository.
- account B, which has an ECS cluster running Fargate.
I have created a "cross-account" role in account A with a trust relationship to account B, and I have attached the AmazonEC2ContainerRegistryPowerUser policy to this role.
I granted access to the ECR repository in account A by adding account B's ID and the "cross-account" role to the repository policy.
I attached a policy to the Fargate TaskExecutionRole allowing Fargate to assume the "cross-account" role.
When trying to deploy a Fargate task in account B that references an image in account A, I'm getting a 500 error.
Fargate will not automatically assume a cross-account role. Fortunately, you do not need to assume a role in another account in order to pull images from that account's ECR repository.
To enable cross-account access to an image in ECR, add access for account B in account A's repository (by setting the repository policy), and then specify a TaskExecutionRole in account B that has permissions to pull from ECR ("ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", "ecr:BatchCheckLayerAvailability").
For example, set a repository policy on the repository in account A like the following:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowCrossAccountPull",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT_B_ID:root"
},
"Action": [
"ecr:GetDownloadUrlForLayer",
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage"
]
}
]
}
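To apply this repository policy from the CLI, a command along these lines could be used, assuming the repository in account A is named my-app and the policy above is saved as ecr-policy.json (the names and the region are placeholders):
aws ecr set-repository-policy --repository-name my-app --policy-text file://ecr-policy.json --region us-east-1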
Then, set your TaskExecutionRole in account B to have a policy like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage"
],
"Resource": "*"
}
]
}
Alternatively, you can use the managed policy AmazonECSTaskExecutionRolePolicy for your TaskExecutionRole instead of defining your own.
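For example, assuming the execution role in account B is named ecsTaskExecutionRole (a common default; substitute your actual role name), the managed policy can be attached with:
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
Either way, the image field in the task definition must use the full registry URI of account A, e.g. ACCOUNT_A_ID.dkr.ecr.REGION.amazonaws.com/REPOSITORY:TAG.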
I tried to upload an image using aws-sdk and multer-s3.
In my local environment the image upload succeeds, but in the production environment (AWS Lambda) it fails with a 403 Forbidden error.
My AWS access key and secret key are the same as in the local environment, and I have verified that the key is present in the production environment.
I don't see any other difference between the two environments. What am I missing?
I have even tried setting the AWS keys in my router code like below, but it also failed.
AWS.config.accessKeyId = 'blabla';
AWS.config.secretAccessKey = 'blalbla';
AWS.config.region = 'ap-northeast-2';
And here is my bucket policy:
{
"Id": "Policy1536755128154",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1536755126539",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::elebooks-image/*",
"Principal": "*"
}
]
}
Update the S3 policy attached to your user to match the policy below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::YOUR-BUCKET",
"arn:aws:s3:::YOUR-BUCKET/*"
]
}
]
}
It's working on my server.
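For completeness, a policy like this can also be attached to the user from the CLI, assuming it is saved locally as s3-user-policy.json (the user name and the policy name S3Access are placeholders):
aws iam put-user-policy --user-name YOUR-USER --policy-name S3Access --policy-document file://s3-user-policy.json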
I haven't worked with AWS Lambda but I am familiar with S3. When you're using the AWS SDK in your local environment, you're probably using the root user with default full access, so it will just work.
With Lambda however, according to the following extract from the documentation, you need to make sure that the IAM role you specified when you created the Lambda function has the appropriate permissions to do an s3:putObject to that bucket.
Permissions for your Lambda function – Regardless of what invokes a Lambda function, AWS Lambda executes the function by assuming the IAM role (execution role) that you specify at the time you create the Lambda function. Using the permissions policy associated with this role, you grant your Lambda function the permissions that it needs. For example, if your Lambda function needs to read an object, you grant permissions for the relevant Amazon S3 actions in the permissions policy. For more information, see Manage Permissions: Using an IAM Role (Execution Role).
See Writing IAM policies: How to grant access to an S3 bucket
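As a concrete sketch, the Lambda execution role could carry a policy like the following, reusing the elebooks-image bucket from the question (adjust the bucket name and the list of actions to what your function actually needs):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::elebooks-image/*"
}
]
}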
I am trying to download the AWS CodeDeploy agent installer on Amazon Linux. I followed the instructions at http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html for Amazon Linux and created the appropriate instance profile, service role, etc. Everything is up to date (Amazon Linux, CLI packages); it is a brand new instance, and I have tried this on at least 3 more brand new instances with the same result. All instances have full outbound internet access.
But the following command for downloading the installer from S3 always fails:
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
with the error:
A client error (403) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
Can anyone help me with this error?
I figured out the problem. According to the CodeDeploy documentation for the IAM instance profile,
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html
the following permissions need to be given to your IAM instance profile:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:Get*",
"s3:List*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
But I had limited the resource to my code bucket, since I don't want my instances to access other buckets directly. It turns out I also need to grant additional permission for the aws-codedeploy-us-east-1/* S3 resource in order to download the agent. This is not very clear in the documentation for setting up the IAM instance profile for CodeDeploy.
A more restrictive policy that works:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::aws-codedeploy-us-east-1/*",
"arn:aws:s3:::aws-codedeploy-us-west-1/*",
"arn:aws:s3:::aws-codedeploy-us-west-2/*",
"arn:aws:s3:::aws-codedeploy-ap-south-1/*",
"arn:aws:s3:::aws-codedeploy-ap-northeast-2/*",
"arn:aws:s3:::aws-codedeploy-ap-southeast-1/*",
"arn:aws:s3:::aws-codedeploy-ap-southeast-2/*",
"arn:aws:s3:::aws-codedeploy-ap-northeast-1/*",
"arn:aws:s3:::aws-codedeploy-eu-central-1/*",
"arn:aws:s3:::aws-codedeploy-eu-west-1/*",
"arn:aws:s3:::aws-codedeploy-sa-east-1/*"
]
}
]
}
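To attach this to the instance-profile role from the CLI, assuming the policy above is saved as codedeploy-s3.json and the instance-profile role is named CodeDeployInstanceRole (a placeholder; use your actual role name):
aws iam put-role-policy --role-name CodeDeployInstanceRole --policy-name CodeDeployAgentS3Access --policy-document file://codedeploy-s3.json
After that, the aws s3 cp command shown above should succeed from the instance.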