I am trying to create a Storage Gateway SMB file share for S3 through Terraform and I am getting an UNAVAILABLE error.
main.tf
resource "aws_storagegateway_smb_file_share" "sg_fileshare" {
authentication = "ActiveDirectory"
gateway_arn = "arn:aws:storagegateway:${var.region}:${var.account_number}:gateway/${var.gateway_id}"
location_arn = "arn:aws:s3:::${module.migration_bucket.name}"
role_arn = module.sgw_role.role_arn
smb_acl_enabled = true
oplocks_enabled = false
}
When I run this, I get the error below.
Error: error waiting for Storage Gateway SMB File Share
(arn:aws:storagegateway:us-east-1:01243564785:share/share-57dHUID) to
create: unexpected state 'UNAVAILABLE', wanted target 'AVAILABLE'. last error: %!s(<nil>)
sgw_assume_role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "storagegateway.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I have also attached these managed policies to the role:
["AmazonS3FullAccess", "AWSGlueConsoleFullAccess", "AWSLambda_FullAccess", "AmazonSNSFullAccess"]
I have an EMR application that runs inside a VPC in a private subnet with NAT enabled.
This application can read files from buckets in the same account, but when it tries to read a file in a bucket in another AWS account it gets access denied.
The EMR application has the following policy:
S3FullAccess:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "s3-object-lambda:*"
      ],
      "Resource": "*"
    }
  ]
}
EMR Spark packages settings:
--conf spark.jars.packages=org.apache.hadoop:hadoop-aws:3.2.0
Spark job:
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.hadoop.fs.s3a.fast.upload", True)
         .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
         .enableHiveSupport().getOrCreate()
         )
df_from_another_s3_account_bucket = spark.read.parquet('s3a://bucket_account_b/path/file.parquet')
This execution is returning the error:
java.nio.file.AccessDeniedException: s3a://bucket_account_b/path/file.parquet: getFileStatus on s3a://bucket_account_b/path/file.parquet: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden;
I have tried using credentials in the spark configs:
.config("fs.s3a.access.key", key_bucket_account_b)
.config("fs.s3a.secret.key", secret_bucket_account_b)
I have also tried creating a bucket policy allowing the emr aws account, but still got the same error.
{
  "Version": "2012-10-17",
  "Id": "Policy1543283",
  "Statement": [
    {
      "Sid": "Stmt1412820423",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::emr_aws_account:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket_account_b"
    }
  ]
}
What else could I try?
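One detail worth checking in the bucket policy above: its Resource lists only the bucket ARN, which covers bucket-level actions such as ListBucket but not object-level actions such as GetObject (which is what getFileStatus and the read need). A sketch of the same policy with both ARNs, keeping the placeholders from the question:

{
  "Version": "2012-10-17",
  "Id": "Policy1543283",
  "Statement": [
    {
      "Sid": "Stmt1412820423",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::emr_aws_account:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket_account_b",
        "arn:aws:s3:::bucket_account_b/*"
      ]
    }
  ]
}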
In API Gateway, I'm having some issues getting "AWS Service" to show as an option in Integration Type when creating a method within a proxy resource. I can see that this option shows up for a normal resource when I don't select the "proxy" checkbox, but if I check that box and create a GET method within the proxy resource, then I don't see that option.
Proxy resources only support HTTP or Lambda integrations.
Here is how you can do this without using a proxy resource: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-proxy-integrate-service/
Here is how I got it working with a proxy resource (NOTE: I'm using TF for the IaC):
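The API-side snippet isn't shown above, so here is a minimal sketch of what it plausibly looks like: a greedy {proxy+} resource whose GET method uses an AWS-type integration that maps the request path through to S3. The REST API name, var.region, and the bucket reference are assumptions, and method/integration responses are omitted for brevity:

resource "aws_api_gateway_rest_api" "s3_proxy_api" {
  name = "s3-proxy-api" # illustrative name
}

resource "aws_api_gateway_resource" "proxy" {
  rest_api_id = aws_api_gateway_rest_api.s3_proxy_api.id
  parent_id   = aws_api_gateway_rest_api.s3_proxy_api.root_resource_id
  path_part   = "{proxy+}" # greedy proxy resource
}

resource "aws_api_gateway_method" "get" {
  rest_api_id   = aws_api_gateway_rest_api.s3_proxy_api.id
  resource_id   = aws_api_gateway_resource.proxy.id
  http_method   = "GET"
  authorization = "NONE"

  request_parameters = {
    "method.request.path.proxy" = true
  }
}

resource "aws_api_gateway_integration" "s3" {
  rest_api_id             = aws_api_gateway_rest_api.s3_proxy_api.id
  resource_id             = aws_api_gateway_resource.proxy.id
  http_method             = aws_api_gateway_method.get.http_method
  integration_http_method = "GET"

  # "AWS" (not "AWS_PROXY") is what the console labels an AWS Service integration
  type        = "AWS"
  credentials = aws_iam_role.s3_api_gateyway_role.arn

  # assumption: var.region is defined and the bucket matches the policy below
  uri = "arn:aws:apigateway:${var.region}:s3:path/${local.bucket_name}-root/{proxy}"

  # pass the greedy path segment through as the S3 object key
  request_parameters = {
    "integration.request.path.proxy" = "method.request.path.proxy"
  }
}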
Then for the policy and role:
resource "aws_iam_policy" "s3_policy" {
name = "s3-policy"
description = "Policy for allowing all S3 Actions"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::${local.bucket_name}-root/*"
}
]
}
EOF
}
resource "aws_iam_role" "s3_api_gateyway_role" {
name = "s3-api-gateyway-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "s3_policy_attach" {
role = aws_iam_role.s3_api_gateyway_role.name
policy_arn = aws_iam_policy.s3_policy.arn
}
Problem
I'm trying to enable audit logging on an AWS redshift cluster. I've been following the instructions provided by AWS here: https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-enable-logging
Current Configuration
I've defined the relevant IAM role as follows
resource "aws_iam_role" "example-role" {
name = "example-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "redshift.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
And I have granted the following IAM permissions to the example-role role:
{
  "Sid": "AllowAccessForAuditLogging",
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetBucketAcl"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket",
    "arn:aws:s3:::example-bucket/*"
  ]
},
The relevant portion of the Redshift cluster configuration is as follows:
resource "aws_redshift_cluster" "example-cluster-name" {
cluster_identifier = "example-cluster-name"
...
# redshift audit logging to S3
logging {
enable = true
bucket_name = "example-bucket-name"
}
master_username = var.master_username
iam_roles = [aws_iam_role.example-role.arn]
...
Error
terraform plan runs correctly, and produces the expected plan based on the above configuration. However, when running terraform apply the following error occurs:
Error: error enabling Redshift Cluster (example-cluster-name) logging: InsufficientS3BucketPolicyFault: Cannot read ACLs of bucket example-bucket-name. Please ensure that your IAM permissions are set up correctly.
Note: I've replaced all resource identifiers with example-* names and identifiers.
@shimo's answer is correct. I'll just add detail for someone like me.
Redshift has full access to S3, but you also need to add a bucket policy (the permission on the S3 side):
{
  "Sid": "Statement1",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::361669875840:user/logs"
  },
  "Action": [
    "s3:GetBucketAcl",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::<your-bucket>",
    "arn:aws:s3:::<your-bucket>/*"
  ]
}
- `361669875840` is the account ID that matches your region; check the list [here][1]
[1]: https://github.com/finos/compliant-financial-infrastructure/blob/main/aws/redshift/redshift_template_public.yml
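Since the question is managed with Terraform, here is a minimal sketch of the same bucket policy as an aws_s3_bucket_policy resource, reusing the example-bucket-name placeholder from the question; substitute the service account ID for your region from the link above:

resource "aws_s3_bucket_policy" "redshift_audit_logging" {
  bucket = "example-bucket-name" # the bucket the cluster logs to

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRedshiftAuditLogging",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::361669875840:user/logs"
      },
      "Action": [
        "s3:GetBucketAcl",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket-name",
        "arn:aws:s3:::example-bucket-name/*"
      ]
    }
  ]
}
EOF
}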
I've been trying to solve this mystery for a few days now, but no joy. Basically, Terraform cannot assume the role and fails with:
Initializing the backend...
2019/10/28 09:13:09 [DEBUG] New state was assigned lineage "136dca1a-b46b-1e64-0ef2-efd6799b4ebc"
2019/10/28 09:13:09 [INFO] Setting AWS metadata API timeout to 100ms
2019/10/28 09:13:09 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2019/10/28 09:13:09 [INFO] AWS Auth provider used: "SharedCredentialsProvider"
2019/10/28 09:13:09 [INFO] Attempting to AssumeRole arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np (SessionName: "terra_cnp", ExternalId: "", Policy: "")
Error: The role "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np" cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
In AWS:
I have a role terraform-admin-np with two AWS managed policies, AmazonS3FullAccess and AdministratorAccess, and a trust relationship with this:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": "sts:AssumeRole"
}
]
}
Then I have a user with this policy document attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TfFullAccessSts",
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole",
        "sts:DecodeAuthorizationMessage",
        "sts:AssumeRoleWithSAML",
        "sts:AssumeRoleWithWebIdentity"
      ],
      "Resource": "*"
    },
    {
      "Sid": "TfFullAccessAll",
      "Effect": "Allow",
      "Action": "*",
      "Resource": [
        "*",
        "arn:aws:ec2:region:account:network-interface/*"
      ]
    }
  ]
}
and an S3 bucket txxxxxxxxxxxxxxte with this policy document attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TFStateListBucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::72xxxxxxxxxx:root"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::txxxxxxxxxxxxxxte"
    },
    {
      "Sid": "TFStateGetPutObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::72xxxxxxxxxx:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::txxxxxxxxxxxxxxte/*"
    }
  ]
}
In Terraform:
The snippet from the provider.tf:
###---- Default Backend and Provider config values -----------###
terraform {
  required_version = ">= 0.12"

  backend "s3" {
    encrypt = true
  }
}

provider "aws" {
  region  = var.region
  version = "~> 2.20"
  profile = var.profile

  assume_role {
    role_arn     = var.role_arn
    session_name = var.session_name
  }
}
Snippet from tgw_cnp.tfvars backend config:
## S3 backend config
key          = "backend/tgw_cnp_state"
bucket       = "txxxxxxxxxxxxxxte"
region       = "us-east-2"
profile      = "local-tgw"
role_arn     = "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np"
session_name = "terra_cnp"
and then running it this way:
TF_LOG=debug terraform init -backend-config=tgw_cnp.tfvars
With this, I can assume the role using the AWS CLI without any issue:
# aws --profile local-tgw sts assume-role --role-arn "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np" --role-session-name AWSCLI
{
    "Credentials": {
        "AccessKeyId": "AXXXXXXXXXXXXXXXXXXA",
        "SecretAccessKey": "UixxxxxxxxxxxxxxxxxxxxxxxxxxxxMt",
        "SessionToken": "FQoGZXIvYXdzEJb//////////wEaD......./5LFwNWf6riiNw9vtBQ==",
        "Expiration": "2019-10-28T13:39:41Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "AROA2P7ZON5TSWMOBQEBC:AWSCLI",
        "Arn": "arn:aws:sts::72xxxxxxxxxx:assumed-role/terraform-admin-np/AWSCLI"
    }
}
but Terraform fails with the above error. Any idea what I'm doing wrong?
Okay, answering my own question...
It works now. I had made a silly mistake: the region in tgw_cnp.tfvars was wrong, and I kept missing it. With the AWS CLI I didn't have to specify the region (it was taken from the profile), so it worked without any issue, but in TF I specified the region and the value was wrong, hence the failure. The suggestions in the error message were a bit misleading.
I can confirm the above config works fine. It's all good now.
I am having issues with Terraform when trying to create an S3 bucket for my ELB access logs. I get the following error:
Error applying plan:
1 error(s) occurred:
* module.elb-author-dev.aws_elb.elb: 1 error(s) occurred:
* aws_elb.elb: Failure configuring ELB attributes: InvalidConfigurationRequest: Access Denied for bucket: my-elb-access-log. Please check S3bucket permission
status code: 409, request id: 13c63697-c016-11e7-8978-67fad50955bd
But if I go to the AWS console and manually grant public access to everyone on the S3 bucket and re-run terraform apply, it works fine. Please help me resolve this issue.
My main.tf file
module "s3-access-logs" {
source = "../../../../modules/aws/s3"
s3_bucket_name = "my-elb-access-data"
s3_bucket_acl = "private"
s3_bucket_versioning = true
s3_bucket_region = "us-east-2"
}
# elastic load balancers (elb)
module "elb-author-dev" {
source = "../../../../modules/aws/elb"
elb_sgs = "${module.secgrp-elb-nonprod-
author.security_group_id}"
subnets = ["subnet-a7ec0cea"]
application_tier = "auth"
access_logs_enabled = true
access_logs_bucket = "my-elb-access-log"
access_logs_prefix = "dev-auth-elb-access-log"
access_logs_interval = "5"
instances = ["${module.ec2-author-dev.ec2_instance[0]}"]
}
my s3/main.tf
resource "aws_s3_bucket" "s3_data_bucket" {
bucket = "${var.s3_bucket_name}"
acl = "${var.s3_bucket_acl}" #"public"
region = "${var.s3_bucket_region}"
policy = <<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-log/dev-auth-elb/AWSLogs/my_account_id/*",
"Principal": {
"AWS": [
"033677994240"
]
}
}
]
}
EOF
versioning {
enabled = "${var.s3_bucket_versioning}" #true
}
tags {
Name = "${var.s3_bucket_name}"
Terraform = "${var.terraform_tag}"
}
}
My elb/main.tf
access_logs {
  enabled       = "${var.access_logs_enabled}" # false
  bucket        = "${var.access_logs_bucket}"
  bucket_prefix = "${var.environment_name}-${var.application_tier}-${var.access_logs_prefix}"
  interval      = "${var.access_logs_interval}" # 60
}
AWS Bucket Permissions
You need to grant access to the ELB principal. Each region has a different principal.
| Region | ELB Account Principal ID |
| --- | --- |
| us-east-1 | 127311923021 |
| us-east-2 | 033677994240 |
| us-west-1 | 027434742980 |
| us-west-2 | 797873946194 |
| ca-central-1 | 985666609251 |
| eu-west-1 | 156460612806 |
| eu-central-1 | 054676820928 |
| eu-west-2 | 652711504416 |
| ap-northeast-1 | 582318560864 |
| ap-northeast-2 | 600734575887 |
| ap-southeast-1 | 114774131450 |
| ap-southeast-2 | 783225319266 |
| ap-south-1 | 718504428378 |
| sa-east-1 | 507241528517 |
| us-gov-west-1* | 048591011584 |
| cn-north-1* | 638102146993 |
* These regions require a separate account.
source: AWS access logging bucket permissions
Terraform
In Terraform your resource config should look like the example below. You will need your AWS account ID and the principal ID from the table above:
resource "aws_s3_bucket" "s3_data_bucket" {
bucket = "${var.s3_bucket_name}"
acl = "${var.s3_bucket_acl}"
region = "${var.s3_bucket_region}"
policy =<<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
"Principal": {
"AWS": ["principal_id_from_table_above"]
}
}
]
}
EOF
}
You may need to split the policy out separately rather than keeping it inline as above, in which case you'd add a bucket policy resource like this:
resource "aws_s3_bucket_policy" "elb_access_logs" {
bucket = "${aws_s3_bucket.s3_data_bucket.id}"
policy =<<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
"Principal": {
"AWS": ["principal_id_from_table_above"]
}
}
]
}
EOF
}