The application is deployed on Fargate and needs to consume messages from TRUCKDIM.fifo. To allow this, I granted all permissions to ecs-task-role.
The Terraform code looks like this; I am allowing ecs-task-role to access the FIFO queue TRUCKDIM.fifo with all permissions.
resource "aws_sqs_queue" "queue_fifo-01" {
  name                              = var.name
  fifo_queue                        = var.fifo_queue
  fifo_throughput_limit             = var.fifo_throughput_limit
  deduplication_scope               = var.deduplication_scope
  content_based_deduplication       = var.content_based_deduplication
  delay_seconds                     = var.delay_seconds
  max_message_size                  = var.max_message_size
  message_retention_seconds         = var.message_retention_seconds
  receive_wait_time_seconds         = var.receive_wait_time_seconds
  visibility_timeout_seconds        = var.visibility_timeout_seconds
  kms_master_key_id                 = var.kms_master_key_id
  kms_data_key_reuse_period_seconds = var.kms_data_key_reuse_period_seconds

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.queue_fifo-02.arn
    maxReceiveCount     = 10
  })

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "Policy1676302010732",
  "Statement": [
    {
      "Sid": "Stmt1676302006390",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::995556942157:role/service-role/ecs-task-role"
      },
      "Action": "sqs:*",
      "Resource": "arn:aws:sqs:us-west-2:995556942157:TRUCKDIM.fifo"
    }
  ]
}
POLICY
}
N.B.: The role ecs-task-role is created via Terraform.
When I run terraform plan, the principal is correctly set:
  ~ Principal = {
      ~ AWS = "AROA78POOLYTRE11KEEZA" -> "arn:aws:iam::995556942157:role/service-role/ecs-task-role"
    }
After running terraform apply and then checking the queue TRUCKDIM.fifo in the AWS console, the principal has changed to a string (assumed_role_id), "AROA6OPZZLYFIE6IYBEF4":
{
  "Version": "2012-10-17",
  "Id": "Policy1676302010732",
  "Statement": [
    {
      "Sid": "Stmt1676302006390",
      "Effect": "Allow",
      "Principal": {
        "AWS": "AROA6OPZZLYFIE6IYBEF4"
      },
      "Action": "sqs:*",
      "Resource": "arn:aws:sqs:us-west-2:995556942157:TRUCKDIM.fifo"
    }
  ]
}
Does someone know why Terraform is replacing the ARN of ecs-task-role with the assumed_role_id?
I am getting access denied to TRUCKDIM.fifo in the logs.
If I paste the role ARN directly in the AWS console, everything works.
It seems like this API is interpreting the ARN you provided and then replacing it with an equivalent Unique Identifier.
The difference between the two is:
An ARN uses the "friendly name" of an IAM object -- ecs-task-role in your case -- which is user-friendly but could potentially change meaning if you were to later destroy this role and then create a new role with the same name.
A unique ID is generated automatically by the IAM API and guaranteed to be unique across all objects that will ever exist. Each role you create will have a unique ID and if you delete that role and recreate one with the same name the new role will then have a distinct unique ID.
You can read more about these different identifier types in IAM identifiers.
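As a rough illustration (not from the original answer): IAM unique IDs begin with documented four-letter prefixes, AROA for roles and AIDA for IAM users, so a small helper can tell a raw unique ID apart from an ARN when you see one in a stored policy:

```python
def classify_principal(principal: str) -> str:
    """Best-effort guess at the kind of AWS principal string.

    Relies on the documented IAM unique-ID prefixes: AROA for roles
    and AIDA for IAM users. Anything starting with "arn:" is an ARN.
    """
    if principal.startswith("arn:"):
        return "arn"
    if principal.startswith("AROA"):
        return "role-unique-id"
    if principal.startswith("AIDA"):
        return "user-unique-id"
    return "unknown"

print(classify_principal("arn:aws:iam::995556942157:role/service-role/ecs-task-role"))  # arn
print(classify_principal("AROA6OPZZLYFIE6IYBEF4"))  # role-unique-id
```

This is only a heuristic for reading policies by eye; the authoritative mapping between an ARN and a unique ID lives in the IAM API.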
My sense of what happened here is that the SQS API parsed your policy after Terraform submitted it and it noticed your ARN arn:aws:iam::995556942157:role/service-role/ecs-task-role and so made a query to the IAM API to find out whether there is a role named ecs-task-role. After looking it up, SQS now also knows the unique ID of this object and it seems to have stored that unique ID instead of the ARN you originally submitted, presumably so that it can "lock in" this particular role and not be tricked into using a different role if you were to delete ecs-task-role and make a new role of the same name later.
Unfortunately Terraform's AWS provider is not aware of this transformation and so from the provider's perspective this seems like the object was edited outside of Terraform and no longer matches the configuration. Although Terraform providers will typically notice when two values are equivalent, in this case that's harder because it would require the AWS provider to query the IAM API to determine whether role AROA6OPZZLYFIE6IYBEF4 has the name ecs-task-role.
Therefore I think the only way to make this configuration converge (that is: not continually propose to change the unique ID back into an ARN) would be to write the unique ID into the IAM policy instead.
One way to achieve that without hard-coding the unique ID would be to use the aws_iam_role data source to ask the IAM API to return the unique ID and then pass that value into your policy, like this:
data "aws_iam_role" "ecs_task" {
  name = "ecs-task-role"
}

resource "aws_sqs_queue" "queue_fifo-01" {
  # ...
  policy = jsonencode({
    # ...
    Principal = {
      AWS = data.aws_iam_role.ecs_task.unique_id
    }
    # ...
  })
}
This unique_id attribute is defined by the aws_iam_role data source in the AWS provider to return the unique ID for the requested object. That should then cause the generated policy to include AROA6OPZZLYFIE6IYBEF4 instead of arn:aws:iam::995556942157:role/service-role/ecs-task-role and it will therefore match the way that the SQS API has stored this policy and therefore allow your configuration to converge.
(Note: I showed using jsonencode to generate the JSON here, instead of a <<POLICY "heredoc" template, because the policy content is now dynamic, and using Terraform's JSON encoder is a robust way to ensure that the result will always be valid JSON without the need to do any special escaping. The details of that are beyond the scope of this question, but if you'd like to learn more about that function please refer to its documentation.)
Related
How to assume a role from another role in the same account.
Below is my first IAM role (roleA) to access SageMaker: one statement to allow access to SageMaker and another to allow AssumeRole.
statement {
  actions = [
    "sagemaker:*",
  ]
  resources = [
    "arn:aws:sagemaker:eu-west-1:1111111111:endpoint/ep",
  ]
}

statement {
  actions = ["sts:AssumeRole"]
  principals {
    type        = "AWS"
    identifiers = ["*"]
  }
}
Now I have another IAM role in the same AWS account (roleB):
{
  "Sid": "",
  "Effect": "Allow",
  "Action": "sts:AssumeRole",
  "Resource": "arn:aws:iam::1111111111:role/roleA"
}
Now I assigned roleB to a microservice. My understanding is that the microservice should have access to the SageMaker endpoint ep, but I am getting an error that I don't have permission. Where am I going wrong?
To make this work, you have to do something like the following in your microservice. As @jarmod already explained, you need to use the credentials from each AssumeRole call to create the next client.
import boto3

sts_client = boto3.client('sts')

# First, assume roleB (the role assigned to the microservice)
source_credentials = sts_client.assume_role(
    RoleArn=roleB_ARN,
    RoleSessionName='session_name',
)

# Create a new STS client using roleB's temporary credentials
dest_sts_client = boto3.client(
    'sts',
    aws_access_key_id=source_credentials.get('Credentials').get('AccessKeyId'),
    aws_secret_access_key=source_credentials.get('Credentials').get('SecretAccessKey'),
    aws_session_token=source_credentials.get('Credentials').get('SessionToken')
)

# Then use that client to assume roleA (role chaining)
dest_credentials = dest_sts_client.assume_role(
    RoleArn=roleA_ARN,
    RoleSessionName='session_name',
)

sagemaker_client = boto3.client(
    'sagemaker',
    aws_access_key_id=dest_credentials.get('Credentials').get('AccessKeyId'),
    aws_secret_access_key=dest_credentials.get('Credentials').get('SecretAccessKey'),
    aws_session_token=dest_credentials.get('Credentials').get('SessionToken')
)
AWS STS Role Chaining
Roles terms and concepts
I am trying to set up my current infrastructure in Terraform (v0.13.0), starting simply by migrating existing Lambda functions. I have used the following code to try to upload an existing .NET Core 3.1 Lambda function to AWS (provider v3.0). I have no issue deploying this manually, but that is obviously not the goal.
Here is the IAM role:
resource "aws_iam_role" "role_lambda" {
  name               = "roleLambda"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}
Below is the function (note I have obfuscated some values):
resource "aws_lambda_function" "lambda_tf" {
  function_name     = "LambdaTFTest"
  role              = aws_iam_role.role_lambda.arn
  handler           = "Lambda::Lambda.Function::FunctionHandler"
  runtime           = "dotnetcore3.1"
  s3_bucket         = "arn:aws:s3:::xxxx-xxxxxx"
  s3_key            = "Lambda.zip"
  s3_object_version = "XxXxXxXxXxXxXxXxXxXxXxXxXxXx"
}
However, I keep getting this error as output, with no further details:
Error: Error creating Lambda function: ValidationException:
status code: 400, request id: a5e89c38-d1f1-456d-93c1-41650fb45386
I already made sure that my Lambda is deployed in the same region as the S3 bucket itself, so that is not the issue. I thought this could be related to some invalid parameter, but I have played with all of them and can't find the problem. I have also double-checked the spelling of the key, version, and so on. How can I make progress on this?
Thanks in advance for your help.
This issue is caused by low timeout values or by using the role name instead of the role ARN. I changed from:
role = aws_iam_role.lambda_role.name
to
role = aws_iam_role.lambda_role.arn
And the function deployment was successful.
The aws_iam_role has a syntax error: there is a missing - in front of POLICY if you want to keep the heredoc indented:
resource "aws_iam_role" "role_lambda" {
  name               = "roleLambda"
  assume_role_policy = <<-POLICY
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Effect": "Allow",
          "Sid": ""
        }
      ]
    }
  POLICY
}
In aws_lambda_function, s3_bucket should be just the bucket name, not its ARN:
resource "aws_lambda_function" "lambda_tf" {
  function_name     = "LambdaTFTest"
  role              = aws_iam_role.role_lambda.arn
  handler           = "Lambda::Lambda.Function::FunctionHandler"
  runtime           = "dotnetcore3.1"
  s3_bucket         = "xxxx-xxxxxx"
  s3_key            = "Lambda.zip"
  s3_object_version = "XxXxXxXxXxXxXxXxXxXxXxXxXxXx"
}
This comes down to one of the parameters being passed in being invalid.
Ensure that the Lambda name is unique, that the S3 bucket and key exist, and that the IAM role has the assume-role policy attached.
The runtime is correct; everything else is user-defined, so you would need to validate it yourself.
Try using the filename property instead of S3 (this will use local disk instead of S3). Does that work? If so, it might be S3 permissions.
If you verify everything and it's still not working, the best suggestion would be to raise it with AWS Support, providing the request ID.
It could really be any of the parameters you pass to the Lambda resource. In my case I set the timeout to "900000" instead of 900; I assumed it was in milliseconds for some reason.
In my case it was the name of the Lambda function: I was using spaces, and they are not allowed.
The s3_bucket should only include the name, like xxxx-xxxxxx
The following formats are wrong:
arn:aws:s3:::xxxx-xxxxxx or
s3://xxxx-xxxxxx
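If you generate configuration programmatically, a tiny helper (hypothetical, not part of the provider) can normalize both wrong formats down to the bare bucket name:

```python
def bucket_name(value: str) -> str:
    """Reduce an S3 bucket ARN or s3:// URI to the bare bucket name."""
    if value.startswith("arn:aws:s3:::"):
        value = value[len("arn:aws:s3:::"):]
    elif value.startswith("s3://"):
        value = value[len("s3://"):]
    # Drop any trailing key or prefix, keeping only the bucket itself.
    return value.split("/", 1)[0]

print(bucket_name("arn:aws:s3:::xxxx-xxxxxx"))  # xxxx-xxxxxx
print(bucket_name("s3://xxxx-xxxxxx"))          # xxxx-xxxxxx
```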
For those who might have run into the same issue, it might help to format your main.tf file by converting all spaces to tabs.
If you're using VS Code, there is a command for this, depending on whether you use spaces or tabs:
Convert Indentation to Tabs
This fixed the issue for me.
I actually got the same error when using a Docker image. The fix there is to set package_type = "Image".
For me it was the Lambda description being too long.
There is a bug with allocating more than 4096 MB of memory, so if you copy the example from the Terraform docs it will fail.
This does not happen on all AWS accounts, only on some.
While creating a CloudFront distribution through the AWS console, we have an option to choose an origin access identity and also to let it update the bucket policy.
I am trying to find a similar option in Terraform so that I don't have to manually manage the S3 bucket read permissions for the CloudFront origin access identity.
I have checked https://www.terraform.io/docs/providers/aws/r/cloudfront_distribution.html but couldn't find any reference to such an option.
Please let me know if I missed something on the page.
I don't think you missed anything on that page, but you also need to look at this page: https://www.terraform.io/docs/providers/aws/r/s3_bucket.html .
It covers more detail on setting up S3 buckets. Note the policy line in the Static Website Hosting section. You can add a line like
policy = "${file("policy.json")}"
and then write whatever policy you need into the policy.json file, which will then be included, letting you avoid manually configuring permissions in the console.
After reading the responses here and doing some reading and tests on my end, I found that the following achieves the effect we want. Assuming you already have your CloudFront distribution somewhere:
resource "aws_s3_bucket" "my-cdn-s3" {
  bucket = "my-cdn"
}

resource "aws_cloudfront_origin_access_identity" "my-oai" {
  comment = "my-oai"
}

resource "aws_s3_bucket_policy" "cdn-cf-policy" {
  bucket = aws_s3_bucket.my-cdn-s3.id
  policy = data.aws_iam_policy_document.my-cdn-cf-policy.json
}

data "aws_iam_policy_document" "my-cdn-cf-policy" {
  statement {
    sid = "1"
    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.my-oai.iam_arn]
    }
    actions = [
      "s3:GetObject"
    ]
    resources = [
      "${aws_s3_bucket.my-cdn-s3.arn}/*"
    ]
  }
}
We would then get this in the bucket's policy, which I have copied from a non-Terraform creation of CF and S3.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-cdn/*"
    }
  ]
}
Let me know if I left anything out.
I'm attempting to create a restrictive SSM role IAM policy that is able to send SNS notifications on failure of SendCommand command executions. I currently have the following policy that gives me "AccessDenied" with no other information (placeholders replaced):
{
  "Statement": {
    "Effect": "Allow",
    "Action": [ "ssm:SendCommand" ],
    "Resource": [
      "arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*",
      "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:document/${DocumentName}",
      "arn:aws:s3:::${S3BucketName}",
      "arn:aws:s3:::${S3BucketName}/*",
      "arn:aws:iam::${AWS::AccountId}:role/${RoleThatHasSNSPublishPerms}",
      "arn:aws:sns:${AWS::RegionId}:${AWS::AccountId}:${SNSTopicName}"
    ]
  }
}
I also have an iam:PassRole permission for ${RoleThatHasSNSPublishPerms}. I am invoking it from a Lambda using Python boto3 like this:
ssm = boto3.client('ssm')
ssm.send_command(
    InstanceIds=[instance_id],
    DocumentName=ssm_document_name,
    TimeoutSeconds=300,
    OutputS3Region=aws_region,
    OutputS3BucketName=output_bucket_name,
    OutputS3KeyPrefix=ssm_document_name,
    ServiceRoleArn=ssm_service_role_arn,
    NotificationConfig={
        'NotificationArn': sns_arn,
        'NotificationEvents': ['TimedOut', 'Cancelled', 'Failed'],
        'NotificationType': 'Command'
    }
)
I know that the problem lies in the "Resource" part of my IAM policy, because when I change the Resource block to simply "*", the run command executes properly. Also, when I remove the NotificationConfig and ServiceRoleArn parts of my Python call, SendCommand succeeds as well.
I don't want a permissive policy that lets this Lambda role execute the command anywhere and on anything. The question is: how do I restrict this policy and still send notifications on failures?
EDIT:
Not sure whether this is new or I just missed it before, but AWS posted some instructions on how to narrow the permissions down to only tagged EC2 instances:
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-rc-setting-up-cmdsec.html
This still doesn't answer the SNS/S3 part of the question, but at least it's a step in the right direction.
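As an untested sketch of the direction this suggests (all names and ARNs below are placeholders): ssm:SendCommand, iam:PassRole, and any S3 access are distinct actions, so lumping every ARN into one statement's Resource list makes the SendCommand statement match none of them. Splitting the policy into one statement per action, built here with json.dumps, might look like:

```python
import json

# Placeholder values; substitute your real region, account, and names.
region, account = "us-east-1", "123456789012"

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            # SendCommand targets the instances and the document only.
            "Effect": "Allow",
            "Action": ["ssm:SendCommand"],
            "Resource": [
                f"arn:aws:ec2:{region}:{account}:instance/*",
                f"arn:aws:ssm:{region}:{account}:document/MyDocument",
            ],
        },
        {
            # Handing the notification service role to SSM is a separate action.
            "Effect": "Allow",
            "Action": ["iam:PassRole"],
            "Resource": [f"arn:aws:iam::{account}:role/MyServiceRole"],
        },
        {
            # Writing command output to the bucket is S3, not SSM.
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::my-output-bucket/*"],
        },
    ],
}, indent=2)
print(policy)
```

Note also that, as I understand it, the sns:Publish permission belongs on the service role that SSM assumes, not on the Lambda's own role.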
I am trying to use Boto3 to create a new instance role that will attach a managed policy only.
I have the following:
Policy Name: my_instance_policy
Policy ARN: arn:aws:iam::123456789012:policy/my_test_policy
I want to create a role called 'my_instance_role', attaching the above policy only.
The boto3 IAM client has the create_role() function, like below:
import boto3

client = boto3.client('iam')
response = client.create_role(
    Path='string',
    RoleName='string',
    AssumeRolePolicyDocument='string',
    Description='string'
)
Here, I do not see an option to specify the policy ARN or name. My understanding is that the AssumeRolePolicyDocument parameter needs the JSON-formatted policy document converted into text.
Is it possible to do what I am looking for?
You would have to create the role (as you are doing above) and then separately attach the managed policy to the role, like this:
response = client.attach_role_policy(
    RoleName='MyRole',
    PolicyArn='<arn of managed policy>'
)
I had a similar question regarding how to supply the AssumeRolePolicyDocument when creating an IAM role with boto3.
I used the following code:
assume_role_policy_document = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "greengrass.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
})

create_role_response = self._iam.create_role(
    RoleName="my-role-name",
    AssumeRolePolicyDocument=assume_role_policy_document
)
Note that the AssumeRolePolicyDocument is about defining the trust relationship and not the actual permissions of the role you are creating.
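Putting the two answers together, here is a minimal sketch (role name, service, and policy ARN are placeholders) that builds the trust policy as JSON text, creates the role, and then attaches the managed policy as a separate step:

```python
import json


def build_trust_policy(service: str) -> str:
    """Return a trust-policy JSON string allowing `service` to assume the role."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": service},
                "Action": "sts:AssumeRole",
            }
        ],
    })


def create_instance_role(iam, role_name: str, policy_arn: str):
    # Trust policy: who may assume the role (here, EC2 instances).
    iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=build_trust_policy("ec2.amazonaws.com"),
    )
    # Permissions: attached separately via the managed policy ARN.
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)


# Usage (requires AWS credentials):
# import boto3
# create_instance_role(
#     boto3.client("iam"),
#     "my_instance_role",
#     "arn:aws:iam::123456789012:policy/my_test_policy",
# )
```

The split mirrors how IAM itself models roles: the trust policy goes in at creation time, while permissions always arrive as attached (or inline) policies afterwards.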