AWS Redshift Serverless: `ERROR: Not authorized to get credentials of role`

I've created a serverless Redshift instance, and I'm trying to import a CSV file from an S3 bucket.
I've made an IAM role with full Redshift + Redshift Serverless access and S3 read access, and added this role as a default role under the Permissions settings of the serverless configuration. Basically, I've tried to do everything that seemed necessary according to the documentation.
However, the docs currently only target the regular cluster-based Redshift, not the Serverless edition, so there might be something I've overlooked.
But when I try running a COPY command (generated by the UI), I get this error:
ERROR: Not authorized to get credentials of role arn:aws:iam::0000000000:role/RedshiftFull
Detail:
-----------------------------------------------
error: Not authorized to get credentials of role arn:aws:iam::00000000:role/RedshiftFull
code: 30000
context: query: 18139
location: xen_aws_credentials_mgr.cpp:402
process: padbmaster [pid=8791]
-----------------------------------------------
[ErrorId: 1-61dc479b-570a4e96449b228552f2c911]
Here's the command I'm trying to run:
COPY dev."test-schema"."transactions" FROM 's3://bucket-name/something-1_2021-11-01T00_00_00.000Z_2022-01-03.csv' IAM_ROLE 'arn:aws:iam::0000000:role/RedshiftFull' FORMAT AS CSV DELIMITER ',' QUOTE '"' REGION AS 'eu-central-1'
Here's the Role
{
"Role": {
"Path": "/",
"RoleName": "RedshiftFull",
"RoleId": "AROA2PAMxxxxxxx",
"Arn": "arn:aws:iam::000000000:role/RedshiftFull",
"CreateDate": "2022-01-10T13:55:03+00:00",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"redshift.amazonaws.com",
"sagemaker.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Description": "Allows Redshift clusters to call AWS services on your behalf.",
"MaxSessionDuration": 3600,
"RoleLastUsed": {}
}
}
{
"AttachedPolicies": [
{
"PolicyName": "redshift-serverless",
"PolicyArn": "arn:aws:iam::719432241830:policy/redshift-serverless"
},
{
"PolicyName": "AmazonRedshiftFullAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AmazonRedshiftFullAccess"
},
{
"PolicyName": "AmazonS3ReadOnlyAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
]
}
The redshift-serverless policy is here:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "redshift-serverless:*",
"Resource": "*"
}
]
}
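For what it's worth, one thing that is easy to overlook with the serverless flavour (this is an assumption on my part, not something confirmed in the post): the trust policy above only allows redshift.amazonaws.com and sagemaker.amazonaws.com, and the serverless workgroup may also need the redshift-serverless.amazonaws.com service principal before it can fetch credentials for the role. A minimal boto3 sketch of what adding that principal to the RedshiftFull role could look like:
import json
import boto3

# Hypothetical sketch: add the Redshift Serverless service principal to the
# trust policy of the role from the question. Whether this is the actual
# cause of the error is an assumption, not something confirmed in the post.
iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "redshift.amazonaws.com",
                    "redshift-serverless.amazonaws.com",  # assumed extra principal for the serverless workgroup
                    "sagemaker.amazonaws.com",
                ]
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

# update_assume_role_policy replaces the whole trust document, so the
# existing principals are repeated here.
iam.update_assume_role_policy(
    RoleName="RedshiftFull",
    PolicyDocument=json.dumps(trust_policy),
)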

Related

aws assume role on shell script does not assume the role

I'm following a tutorial where I assume roles on EC2 instances so they have access to services (https://medium.com/swlh/aws-iam-assuming-an-iam-role-from-an-ec2-instance-882081386c49), but I'm stuck on the CLI part when I start running "aws s3 ls". Just so I know it works, I gave the AssumedRole full access to S3.
This is my setup:
ImplicitRole // will be attached to EC2
policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::xxx:role/*"
}
]
}
AssumedRole // will be the role assumed once logged in to EC2
policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::*"
]
}
]
}
What happens is I run the following in the CLI:
aws sts assume-role --role-arn arn:aws:iam::****:role/AssumedRole --role-session-name test-session
Then I will get the Credentials
{
"Credentials": {
"AccessKeyId": "key",
"SecretAccessKey": "secret",
"SessionToken": "longtoken",
"Expiration": "2020-07-08T07:29:51+00:00"
},
"AssumedRoleUser": {
"AssumedRoleId": "AKDIEEOOKDLKSDJFDJ:test-session",
"Arn": "arn:aws:sts::xxxx:assumed-role/AssumedRole/test-session"
}
}
and then update the environment variables, e.g. set AWS_ACCESS_KEY_ID=XXXX, etc.
Once done, I run the following, but it gives me an AccessDenied on ListObjects:
aws s3 ls s3://bucket
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
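A couple of things worth checking here, both assumptions since the post doesn't show them: the AssumedRole also needs a trust relationship that names the EC2 role (or the account) as principal, and all three values returned by assume-role must be exported, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN; temporary credentials are rejected without the session token. A minimal boto3 sketch of the same flow, with the role ARN as a placeholder:
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/AssumedRole",  # placeholder account ID
    RoleSessionName="test-session",
)
creds = resp["Credentials"]

# All three values matter: in a shell you would export AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN together.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="bucket", MaxKeys=1))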

Terraform: Issue with assume_role

I've been trying to solve this mystery for a few days now, but no joy. Basically, Terraform cannot assume the role and fails with:
Initializing the backend...
2019/10/28 09:13:09 [DEBUG] New state was assigned lineage "136dca1a-b46b-1e64-0ef2-efd6799b4ebc"
2019/10/28 09:13:09 [INFO] Setting AWS metadata API timeout to 100ms
2019/10/28 09:13:09 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2019/10/28 09:13:09 [INFO] AWS Auth provider used: "SharedCredentialsProvider"
2019/10/28 09:13:09 [INFO] Attempting to AssumeRole arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np (SessionName: "terra_cnp", ExternalId: "", Policy: "")
Error: The role "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np" cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
In AWS:
I have role terraform-admin-np with two AWS managed policies, AmazonS3FullAccess & AdministratorAccess, and a trust relationship with this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": "sts:AssumeRole"
}
]
}
Then I have a user with this policy document attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TfFullAccessSts",
"Effect": "Allow",
"Action": [
"sts:AssumeRole",
"sts:DecodeAuthorizationMessage",
"sts:AssumeRoleWithSAML",
"sts:AssumeRoleWithWebIdentity"
],
"Resource": "*"
},
{
"Sid": "TfFullAccessAll",
"Effect": "Allow",
"Action": "*",
"Resource": [
"*",
"arn:aws:ec2:region:account:network-interface/*"
]
}
]
}
and an S3 bucket txxxxxxxxxxxxxxte with this policy document attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TFStateListBucket",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::txxxxxxxxxxxxxxte"
},
{
"Sid": "TFStateGetPutObject",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::txxxxxxxxxxxxxxte/*"
}
]
}
In Terraform:
The snippet from the provider.tf:
###---- Default Backend and Provider config values -----------###
terraform {
required_version = ">= 0.12"
backend "s3" {
encrypt = true
}
}
provider "aws" {
region = var.region
version = "~> 2.20"
profile = var.profile
assume_role {
role_arn = var.role_arn
session_name = var.session_name
}
}
Snippet from tgw_cnp.tfvars backend config:
## S3 backend config
key = "backend/tgw_cnp_state"
bucket = "txxxxxxxxxxxxxxte"
region = "us-east-2"
profile = "local-tgw"
role_arn = "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np"
session_name = "terra_cnp"
and then running this way:
TF_LOG=debug terraform init -backend-config=tgw_cnp.tfvars
With this, I can assume role using AWS CLI without any issue:
# aws --profile local-tgw sts assume-role --role-arn "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np" --role-session-name AWSCLI
{
"Credentials": {
"AccessKeyId": "AXXXXXXXXXXXXXXXXXXA",
"SecretAccessKey": "UixxxxxxxxxxxxxxxxxxxxxxxxxxxxMt",
"SessionToken": "FQoGZXIvYXdzEJb//////////wEaD......./5LFwNWf6riiNw9vtBQ==",
"Expiration": "2019-10-28T13:39:41Z"
},
"AssumedRoleUser": {
"AssumedRoleId": "AROA2P7ZON5TSWMOBQEBC:AWSCLI",
"Arn": "arn:aws:sts::72xxxxxxxxxx:assumed-role/terraform-admin-np/AWSCLI"
}
}
but Terraform fails with the above error. Any idea what I'm doing wrong?
Okay, answering my own question...
It works now. I had made a silly mistake - the region in tgw_cnp.tfvars was wrong, which I kept missing. In the AWS CLI I didn't have to specify the region (it came from the profile), so it worked without any issue, but in Terraform I specified the region and the value was wrong, hence the failure. The suggestions in the error message were a bit misleading.
I can confirm the above config works fine. It's all good now.
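Since the root cause turned out to be a wrong region value, a quick sanity check before filling in the tfvars is to ask S3 where the state bucket actually lives. A small sketch, assuming boto3 and the profile from the post:
import boto3

session = boto3.Session(profile_name="local-tgw")
s3 = session.client("s3")

# get_bucket_location returns None for us-east-1 and a region code otherwise.
location = s3.get_bucket_location(Bucket="txxxxxxxxxxxxxxte")["LocationConstraint"]
print(location or "us-east-1")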

How to solve the error "Message":"User: anonymous is not authorized to perform: iam:PassRole on resource"

I am trying to register a snapshot for my elasticsearch on AWS. My goal is to create a snapshot of elasticsearch domain on a s3 bucket. Below is the command I am using:
curl -XPUT https://vpc-xxxxxxx.ap-southeast-2.es.amazonaws.com/_snapshot/es-snapshot -d '
{
"type": "s3",
"settings": {
"bucket": "$MY_BUCKET",
"region": "ap-southeast-2",
"role_arn": "arn:aws:iam::xxxx:role/es-snapshot-role"
}
}'
But I got this error:
{"Message":"User: anonymous is not authorized to perform: iam:PassRole on resource: arn:aws:iam::xxxx:role/es-snapshot-role"}
It seems like a role permission issue. I have configured the role policy as:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"es:*",
"s3:*",
"iam:PassRole",
"es:ESHttpPut"
],
"Resource": [
"*"
]
}
]
}
And its trust relationship is:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "es.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
I wonder what else I missed here.
This post AccessDenied for EC2 Instance with attached IAM Role doesn't seem to relate to my issue.
Registering a Manual Snapshot Repository
You must register a snapshot repository with Amazon Elasticsearch Service before you can take manual index snapshots. This one-time operation requires that you sign your AWS request with credentials that are allowed to access TheSnapshotRole, as described in Manual Snapshot Prerequisites.
You can't use curl to perform this operation, because it doesn't support AWS request signing. Instead, use the sample Python client, Postman, or some other method to send a signed request to register the snapshot repository. The request takes the following form:
PUT elasticsearch-domain-endpoint/_snapshot/my-snapshot-repo
{
"type": "s3",
"settings": {
"bucket": "s3-bucket-name",
"region": "region",
"role_arn": "arn:aws:iam::123456789012:role/TheSnapshotRole"
}
}
Reference from AWS Documentation: Working with Amazon Elasticsearch Service Index Snapshots
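Since curl can't sign the request, a small Python sketch along the lines of the documented sample client might look like the following (this assumes the requests and requests-aws4auth packages; the endpoint, bucket, and role ARN are the placeholders from the question):
import boto3
import requests
from requests_aws4auth import AWS4Auth

region = "ap-southeast-2"
creds = boto3.Session().get_credentials()
# The request is signed as your IAM identity; that identity (not the snapshot
# role itself) is what needs iam:PassRole on es-snapshot-role.
awsauth = AWS4Auth(creds.access_key, creds.secret_key, region, "es",
                   session_token=creds.token)

url = "https://vpc-xxxxxxx.ap-southeast-2.es.amazonaws.com/_snapshot/es-snapshot"
payload = {
    "type": "s3",
    "settings": {
        "bucket": "MY_BUCKET",
        "region": region,
        "role_arn": "arn:aws:iam::xxxx:role/es-snapshot-role",
    },
}

response = requests.put(url, auth=awsauth, json=payload)
print(response.status_code, response.text)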
Add iam:PassRole permissions to your IAM user and try the command again.

AWS Redshift: Masteruser not authorized to assume role

I created a cloudformation stack with redshift cluster and a masteruser: testuser
"RedshiftCluster" : {
"IamRoles" : [
{
"Fn::GetAtt": [
"IAMInstanceRole",
"Arn"
]
}
]
... other configurations
It uses the below IAM role (IAMInstanceRole), which is in the in-sync status, and the Redshift cluster is up and running:
"IAMInstanceRole": {
"Properties": {
"RoleName": "test-iam-role",
"AssumeRolePolicyDocument": {
"Statement": [
{
"Action": [
"sts:AssumeRole"
],
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com",
"redshift.amazonaws.com",
"s3.amazonaws.com"
]
}
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "root",
"PolicyDocument": {
"Version" : "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
}
]
}
I'm trying to load a CSV file from S3 into Redshift using the COPY command with iam_role as the credential. The iam_role value is the ARN of IAMInstanceRole (declared above).
Whenever I execute the below command:
copy test_table from 's3://test-bucket/test.csv' CREDENTIALS 'aws_iam_role=arn:aws:iam::<account-id>:role/test-iam-role' MAXERROR 100000 removequotes TRIMBLANKS emptyasnull blanksasnull delimiter '|';
I get the error:
ERROR: User arn:aws:redshift:us-west-2:189675173661:dbuser:automated-data-sanity-redshiftcluster-fbp9fgls6lri/sanityuser is not authorized to assume IAM Role arn:aws:iam::189675173661:role/sanity-test-iam-instance-role
DETAIL:
-----------------------------------------------
error: User arn:aws:redshift:us-west-2:<account-id>:dbuser:test-redshiftcluster-fbp9fgls6lri/testuser is not authorized to assume IAM Role arn:aws:iam::<account-id>:role/test-iam-role
code: 8001
context: IAM Role=arn:aws:iam::<account-id>:role/test-iam-role
query: 1139
location: xen_aws_credentials_mgr.cpp:236
process: padbmaster [pid=29280]
-----------------------------------------------
Please suggest some resolution.
I ran into the same problem, but after a good hour of troubleshooting I realised I had failed to add the Redshift role to the cluster while I was creating it. If you select the cluster in Redshift, choose the 'Actions' drop-down and select 'Manage IAM roles'; from there you will be able to attach the Redshift role you may have created for this cluster.
That solved the problem for me, anyways.
Hope this helps.
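If you prefer to attach the role outside the console, a rough boto3 equivalent of the 'Manage IAM roles' action is below (a sketch only; the cluster identifier and role ARN are placeholders):
import boto3

redshift = boto3.client("redshift")
# Associate the role with the cluster, the same thing the console's
# 'Manage IAM roles' dialog does.
redshift.modify_cluster_iam_roles(
    ClusterIdentifier="test-redshiftcluster-fbp9fgls6lri",  # placeholder
    AddIamRoles=["arn:aws:iam::123456789012:role/test-iam-role"],  # placeholder
)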
I resolved this issue !!
By default, IAM roles that are available to an Amazon Redshift cluster are available to all users on that cluster. You can choose to restrict IAM roles to specific Amazon Redshift database users on specific clusters or to specific regions.
To permit only specific database users to use an IAM role, take the following steps.
To identify specific database users with access to an IAM role
1. Identify the Amazon Resource Name (ARN) for the database users in your Amazon Redshift cluster. The ARN for a database user is in the format: arn:aws:redshift:region:account-id:dbuser:cluster-name/user-name.
2. Open the IAM console at https://console.aws.amazon.com/.
3. In the navigation pane, choose Roles.
4. Choose the IAM role that you want to restrict to specific Amazon Redshift database users.
5. Choose the Trust Relationships tab, and then choose Edit Trust Relationship. A new IAM role that allows Amazon Redshift to access other AWS services on your behalf has a trust relationship as follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "redshift.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
6. Add a condition to the sts:AssumeRole action section of the trust relationship that limits the sts:ExternalId field to values that you specify. Include an ARN for each database user that you want to grant access to the role.
For example, the following trust relationship specifies that only database users user1 and user2 on cluster my-cluster in region us-west-2 have permission to use this IAM role.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "redshift.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": [
"arn:aws:redshift:us-west-2:123456789012:dbuser:my-cluster/user1",
"arn:aws:redshift:us-west-2:123456789012:dbuser:my-cluster/user2"
]
}
}
}]
}
7. Choose Update Trust Policy.
Here's a template that works fine:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"RedshiftRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "Redshift-Role",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"redshift.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
}
]
}
},
"RedshiftSG": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupName": "Redshift Security Group",
"GroupDescription": "Enable access to redshift",
"VpcId": "vpc-11223344",
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": 5439,
"ToPort": 5439,
"CidrIp": "0.0.0.0/0"
}
],
"Tags": [
{
"Key": "Name",
"Value": "Redshift Security Group"
}
]
}
},
"RedshiftCluster": {
"Type": "AWS::Redshift::Cluster",
"Properties": {
"ClusterType": "single-node",
"NodeType": "dc2.large",
"MasterUsername": "master",
"MasterUserPassword": "YourPassword",
"IamRoles": [
{
"Fn::GetAtt": [
"RedshiftRole",
"Arn"
]
}
],
"VpcSecurityGroupIds": [
{
"Ref": "RedshiftSG"
}
],
"PubliclyAccessible": true,
"Port": 5439,
"DBName": "foo"
}
}
}
}
Be sure to insert your own VpcId in the security group.
The Role can be assumed by Redshift and grants access to s3:* (which you should reduce in scope).
I was trying to access the Glue Data Catalog from Redshift. I created the role with the necessary policies attached (AWSGlueServiceRole, AmazonS3FullAccess) and added it to the cluster. However, I had set the AWS service as Glue when it should have been Redshift, since Redshift is the service needing the access. Attaching these policies to the Redshift role I have (and adding the role to the cluster, if necessary) solved the problem for me.
Resolved it
Complete steps followed:
Create an S3 bucket in the same region as Redshift (move-redshift-data).
Create a folder inside it (move-redshift-data).
Create an IAM role (move-redshift-data-role), attach S3FullAccess, and add the following to the trust relationship:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::126111577039:root”
},
"Action": "sts:AssumeRole"
}
]
}
where 126111577039 is the account ID of the Redshift cluster.
Find the role already attached to your cluster:
Open your Redshift cluster.
Click on Actions --> Manage IAM roles.
You should see the role (mine is RedshiftDynamoDBAccess).
Open that role in the IAM console and attach the following inline policy to it (this is the role already associated with the Redshift cluster, as seen under Manage IAM roles):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": "arn:aws:iam::888850378087:role/move-redshift-data-role"
}
]
}
where 888850378087 is the account that has the S3 bucket and the move-redshift-data-role in it.
Finally, run the command:
unload ('select * from sellercompliancestate')
to 's3://unload-swarnimg/unload-swarnimg/'
iam_role 'arn:aws:iam::126111577039:role/RedshiftDynamoDBAccess,arn:aws:iam::888850378087:role/move-redshift-data-role'
allowoverwrite
format as csv;
Got the solution after searching for a while. I created a separate IAM role for Redshift as John suggested, which is correct advice but was not the issue in my case.
Then followed the thread to resolve the issue: Copy from remote S3 using IAM Role - not authorized to assume IAM Role
I had to activate the region where my cluster was, under Account Settings.
I solved it by editing the role's trust relationship like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"sagemaker.amazonaws.com",
"redshift.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
I add "sagemaker.amazonaws.com" to my function AmazonRedshiftML
Function
I figured it out.
There is no use in deleting the cluster, rebooting, or managing IAM roles on the Redshift cluster.
Though I did all of the above many times, I was still getting the error. Then I tried the steps below.
Give the Access Key ID and Secret Access Key in the COPY command, instead of an IAM role. Example below.
copy users from 's3://awssampledbuswest2/tickit/allusers_pipe.txt'
credentials 'aws_access_key_id=SKDFHSJKD;aws_secret_access_key=SDJHFJHajhsdjh'
delimiter '|' region 'us-west-2';

AWS Codepipeline with a Codecommit targetsource repository from another account

Is it possible to create a codepipeline that has a target source of a CodeCommit Repository in another account?
I just had to do this, I'll explain the process.
Account C is the account with your CodeCommit repository.
Account P is the account with your CodePipeline... pipelines.
In Account P:
Create an AWS KMS encryption key and give Account C access to it (see the pre-requisite step in the AWS guide). You will also need to add the CodePipeline role, and if you have CodeBuild and CodeDeploy steps, add those roles too.
In your CodePipeline artifacts S3 bucket you need to grant Account C access. Go to the Bucket Policy and add:
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNTC_ID:root"
},
"Action": [
"s3:Get*",
"s3:Put*"
],
"Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNTC_ID:root"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::YOUR_BUCKET_NAME"
}
Change ACCOUNTC_ID to the account ID of Account C, and change YOUR_BUCKET_NAME to the CodePipeline artifact S3 bucket name.
Add a policy to your CodePipeline service role so you can get access to Account C and the CodeCommit repositories:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": [
"arn:aws:iam::ACCOUNTC_ID:role/*"
]
}
}
Again, change ACCOUNTC_ID to the account ID of Account C.
In Account C:
Create an IAM policy that lets Account P access the CodeCommit resources and also the KMS key, so it can encrypt them with the same key as the rest of your CodePipeline:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject*",
"s3:PutObject",
"s3:PutObjectAcl",
"codecommit:ListBranches",
"codecommit:ListRepositories"
],
"Resource": [
"arn:aws:s3:::YOUR_BUCKET_NAME_IN_ACCOUNTP_FOR_CODE_PIPELINE/*"
]
},
{
"Effect": "Allow",
"Action": [
"kms:DescribeKey",
"kms:GenerateDataKey*",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:Decrypt"
],
"Resource": [
"arn:aws:kms:YOUR_KMS_ARN"
]
}
]
}
Replace bucket name and KMS ARN in the above policy. Save the policy as something like CrossAccountPipelinePolicy.
Create a role for cross-account access and attach the above policy as well as the AWSCodeCommitFullAccess policy. Make sure to set the trusted entity to the account ID of Account P.
In AWS CLI
You can't do this bit in the console, so you have to use the AWS CLI. This is to get your CodePipeline in Account P to assume the role in the Source step and dump the source artifact in the S3 bucket for all your next steps to use.
aws codepipeline get-pipeline --name NameOfPipeline > pipeline.json
Modify the pipeline json so it looks a bit like this and replace the bits that you need to:
"pipeline": {
"name": "YOUR_PIPELINE_NAME",
"roleArn": "arn:aws:iam::AccountP_ID:role/ROLE_NAME_FOR_CODE_PIPELINE",
"artifactStore": {
"type": "S3",
"location": "YOUR_BUCKET_NAME",
"encryptionKey": {
"id": "arn:aws:kms:YOUR_KMS_KEY_ARN",
"type": "KMS"
}
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "CodeCommit",
"version": "1"
},
"runOrder": 1,
"roleArn": "arn:aws:iam::AccountC_ID:role/ROLE_NAME_WITH_CROSS_ACCOUNT_POLICY",
"configuration": {
"BranchName": "master",
"PollForSourceChanges": "false",
"RepositoryName": "YOURREPOSITORYNAME"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"inputArtifacts": []
}
]
},
Update the pipeline with aws codepipeline update-pipeline --cli-input-json file://pipeline.json
Verify it works by running the pipeline.
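If you want to kick the pipeline off from a script instead of the console, a small sketch (the pipeline name is a placeholder):
import boto3

codepipeline = boto3.client("codepipeline")
# Start a run and print the execution id so you can follow it in the console.
resp = codepipeline.start_pipeline_execution(name="YOUR_PIPELINE_NAME")
print(resp["pipelineExecutionId"])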
You can deploy resources using a pipeline with a CodeCommit repository in another account.
Let's say you have Account A, where your CodeCommit repository sits, and Account B, where your CodePipeline sits.
Configure the following in Account B:
You will need to create a custom KMS key, because the AWS default key does not have an associated key policy. You can use Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account if you need assistance with creating the CMK. Add the CodePipeline service role to the KMS key policy to allow the pipeline to use it.
Set up an event bus for receiving events from the other account: go to CloudWatch → Event Buses (under the Events section) → Add Permission → enter the Account A (DEV) AWS account ID → Add. For more details, check Creating an Event Bus.
Add the following Policy to S3 pipeline Artifact store:
{
"Version": "2012-10-17",
"Id": "PolicyForKMSAccess",
"Statement": [
{
"Sid": "AllowAccessFromAAccount",
"Effect": "Allow",
"Principal": { "AWS": "arn:aws:iam::ACCOUNT_A_ID:root" },
"Action": [ "s3:Get*", "s3:Put*", "s3:ListBucket" ],
"Resource": "arn:aws:s3:::NAME-OF-THE-BUCKET/*"
}
]
}
Edit the pipeline IAM role to assume the role in Account A, as follows:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": [
"arn:aws:iam::ACCOUNT_A_ID:role/*"
]
}
}
Create a CloudWatch Event Rule to trigger the pipeline on the master branch of the CodeCommit repository in Account A. Add the CodePipeline's ARN as a target of this rule.
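A rough boto3 sketch of such a rule; the repository ARN, pipeline ARN, rule name, and the events role are assumptions to adapt to your accounts:
import json
import boto3

events = boto3.client("events")

# Match pushes to the master branch of the Account A repository.
pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": ["arn:aws:codecommit:us-east-1:ACCOUNT_A_ID:YOUR_REPO"],  # placeholder
    "detail": {"referenceType": ["branch"], "referenceName": ["master"]},
}
events.put_rule(Name="start-pipeline-on-commit", EventPattern=json.dumps(pattern))

# The target role must allow events.amazonaws.com to call
# codepipeline:StartPipelineExecution on the pipeline.
events.put_targets(
    Rule="start-pipeline-on-commit",
    Targets=[{
        "Id": "codepipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:ACCOUNT_B_ID:YOUR_PIPELINE",  # placeholder
        "RoleArn": "arn:aws:iam::ACCOUNT_B_ID:role/start-pipeline-events-role",  # placeholder
    }],
)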
Now, do the following in Account A:
1. Create a cross-account IAM role with the following:
a) AWSCodeCommitFullAccess
b) A trust relationship that allows Account B to assume this role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT_B_ID:root"
},
"Action": "sts:AssumeRole"
}
]
}
c) An inline policy for KMS, CodeCommit, and S3 access:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:Put*",
"codecommit:*"
],
"Resource": [
"arn:aws:s3:::YOUR_BUCKET_NAME_IN_B_FOR_CODE_PIPELINE_ARTIFACTS/*"
]
},
{
"Effect": "Allow",
"Action": [
"kms:*"
],
"Resource": [
"arn:aws:kms:YOUR_KMS_ARN_FROM_B_ACCOUNT"
]
}
]
}
2. Update your pipeline as #Eran Medan suggested.
For more details, please visit AWS CodePipeline with a Cross-Account CodeCommit Repository
Also, please note that I have granted a lot more permissions than required (for example codecommit:* and kms:*); you can tighten them as per your needs.
I hope this will help.
Yes, it should be possible. Follow these instructions: http://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html