aws_storagegateway_gateway SMB share "File/directory access controlled by" option with Terraform

I am using the Terraform code below to create an AWS file storage gateway. I am able to create the SMB file share, but its "File/directory access controlled by" setting comes out as POSIX permissions, and I want to change it to Windows Access Control Lists.
I checked https://www.terraform.io/docs/providers/aws/r/storagegateway_smb_file_share.html but didn't find the right argument to implement this.
Can someone help me with this?
resource "aws_storagegateway_gateway" "gateway" {
gateway_ip_address = var.gateway_ip_address
gateway_name = var.gateway_name
gateway_timezone = var.gateway_timezone
gateway_type = "FILE_S3"
smb_active_directory_settings {
domain_name = var.domain
username = var.domain_username
password = var.domain_password
}
}
resource "aws_storagegateway_smb_file_share" "storage_gw" {
authentication = "ActiveDirectory"
gateway_arn = aws_storagegateway_gateway.gateway.arn
default_storage_class = "S3_STANDARD_IA"
location_arn = aws_s3_bucket.bucket.arn
role_arn = aws_iam_role.gateway.arn
valid_user_list = ["#application_group"]
}
Thanks in advance

I was able to solve this issue with the CLI command below:
aws storagegateway update-smb-file-share --file-share-arn arn:aws:storagegateway:eu-central-1:xxxxxxxx:gateway/sgw-xxxxxx --smbacl-enabled
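Newer versions of the AWS provider also expose an smb_acl_enabled argument on aws_storagegateway_smb_file_share, so if your provider version includes it you should be able to set this directly in Terraform instead of falling back to the CLI. A minimal sketch, assuming such a provider version:
resource "aws_storagegateway_smb_file_share" "storage_gw" {
  authentication  = "ActiveDirectory"
  gateway_arn     = aws_storagegateway_gateway.gateway.arn
  location_arn    = aws_s3_bucket.bucket.arn
  role_arn        = aws_iam_role.gateway.arn
  smb_acl_enabled = true # use Windows ACLs instead of POSIX permissions
  valid_user_list = ["@application_group"]
}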

Related

How to enable bigtable api (read) on a GCP VM instance using terraform

I am using Terraform to provision VMs in GCP and I want to enable the Bigtable API (read).
I am using the scope list below but I cannot find the scope alias for Bigtable read access.
service_account = {
  dev = {
    email  = "6xxxxx4-compute@developer.gserviceaccount.com"
    scopes = ["userinfo-email", "compute-ro", "storage-ro", "logging-write", "trace"]
  }
  ...
}
The list in this link doesn't seem to be complete.
You are right, the alias for
"https://www.googleapis.com/auth/bigtable.data.readonly" is missing.
As a workaround, this will also work:
service_account {
  email  = "xxxxxxxxx@xxxxxx.iam.gserviceaccount.com"
  scopes = ["https://www.googleapis.com/auth/bigtable.data.readonly"]
}
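For context, that service_account block sits inside a google_compute_instance resource, and a full scope URL works anywhere an alias would. A minimal sketch, with hypothetical names and values for everything outside the service account settings:
resource "google_compute_instance" "vm" {
  name         = "example-vm"    # hypothetical
  machine_type = "e2-medium"     # hypothetical
  zone         = "us-central1-a" # hypothetical

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }

  # Full scope URL in place of the missing alias.
  service_account {
    email  = "xxxxxxxxx@xxxxxx.iam.gserviceaccount.com"
    scopes = ["https://www.googleapis.com/auth/bigtable.data.readonly"]
  }
}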
To get Google to update the page, go to the feedback link at the bottom of the page you linked and notify them there.

How to use Secrets Manager in creating a DMS target endpoint for RDS

How do we create a DMS endpoint for RDS using Terraform by providing the Secrets Manager ARN to fetch the credentials? I looked at the documentation but couldn't find anything.
There's currently an open feature request for DMS to natively use Secrets Manager to connect to your RDS instance. It has a linked pull request that initially adds support for PostgreSQL and Oracle RDS instances, but that is currently unreviewed, so it's hard to know when the functionality might be released.
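If that pull request has since been merged and released by the time you read this, recent AWS provider versions expose secrets_manager_arn and secrets_manager_access_role_arn arguments on aws_dms_endpoint for the supported engines. A minimal sketch, assuming a provider version with that support and hypothetical resource names:
resource "aws_dms_endpoint" "example" {
  endpoint_id   = "example"
  endpoint_type = "source"
  engine_name   = "postgres"
  database_name = "example"

  # DMS fetches the username/password from this secret via the access role.
  secrets_manager_arn             = aws_secretsmanager_secret.example.arn
  secrets_manager_access_role_arn = aws_iam_role.dms_secrets_access.arn
}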
If you aren't using automatic secret rotation (or can rerun Terraform after the rotation), and don't mind the password being stored in the state file, but still want to use the secrets stored in AWS Secrets Manager, then you could have Terraform retrieve the secret from Secrets Manager at apply time and use the resulting username and password combination to configure the DMS endpoint instead.
A basic example would look something like this:
data "aws_secretsmanager_secret" "example" {
name = "example"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
}
resource "aws_dms_endpoint" "example" {
certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
database_name = "test"
endpoint_id = "test-dms-endpoint-tf"
endpoint_type = "source"
engine_name = "aurora"
extra_connection_attributes = ""
kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
password = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["password"]
port = 3306
server_name = "test"
ssl_mode = "none"
tags = {
Name = "test"
}
username = "test"
}
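Note that the jsondecode call assumes the secret's value is stored as a JSON object with a password key, along the lines of:
{
  "username": "test",
  "password": "example-password"
}
If the secret is stored as a plain string instead, you can use data.aws_secretsmanager_secret_version.example.secret_string directly.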

Terraform - AWS IAM user with Programmatic access

I'm working with AWS via Terraform.
I'm trying to create an IAM user with an access type of "Programmatic access".
With the AWS Management Console this is quite simple: there's a "Programmatic access" checkbox when creating the user.
When trying with Terraform (referring to the aws_iam_user docs), it seems that only the following arguments are supported:
name
path
permissions_boundary
force_destroy
tags
Maybe this should be configured via a policy?
Any help will be appreciated.
You can use the aws_iam_access_key Terraform resource (https://www.terraform.io/docs/providers/aws/r/iam_access_key.html) to create access keys for the user, which is what gives a user programmatic access.
Hope this helps.
The aws_iam_user resource needs to also have an aws_iam_access_key resource created for it.
The iam-user module has a comprehensive example of using it.
You could also use that module straight from the registry and let that do everything for you.
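As a rough sketch of the registry route (treat the module path, version pin, and input names as assumptions to verify against the version you actually pin):
module "iam_user" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-user"
  version = "~> 5.0" # hypothetical pin

  name                  = "user-name"
  create_iam_access_key = true
}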
If you don't want to encrypt the key and are just looking to get the access key and secret key in plain text, you can use this:
main.tf
resource "aws_iam_access_key" "sagemaker" {
user = aws_iam_user.user.name
}
resource "aws_iam_user" "user" {
name = "user-name"
path = "/"
}
data "aws_iam_policy" "sagemaker_policy" {
arn = "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess"
}
resource "aws_iam_policy_attachment" "attach-policy" {
name = "sagemaker-policy-attachment"
users = [aws_iam_user.user.name]
policy_arn = data.aws_iam_policy.sagemaker_policy.arn
}
output.tf
output "secret_key" {
value = aws_iam_access_key.user.secret
}
output "access_key" {
value = aws_iam_access_key.user.id
}
You will get the access key and secret key in plain text and can use them directly.
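If you do want the secret encrypted instead, aws_iam_access_key also accepts a pgp_key argument and then exposes the ciphertext as encrypted_secret rather than secret. A sketch, assuming the Keybase username is yours:
resource "aws_iam_access_key" "sagemaker" {
  user    = aws_iam_user.user.name
  pgp_key = "keybase:some_person_that_exists" # hypothetical Keybase username
}

output "encrypted_secret_key" {
  value = aws_iam_access_key.sagemaker.encrypted_secret
}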

Having trouble with Terraform and AWS Storage Gateway disks

I am using Terraform with AWS and have been able to create an AWS Storage Gateway file gateway using the aws_storagegateway_gateway resource.
The gateway is created and its status is 'online'; however, there is no cache disk added yet in the console, which is normal since that has to be done after the gateway is created. The VM does have a disk, and it is available to add in the console; doing so in the console works perfectly.
However, I am trying to add the disk with Terraform once the gateway is created and cannot seem to get the code to work, or quite possibly don't understand how to get it to work.
I am trying to use the aws_storagegateway_cache resource, but I get an error on disk_id and do not know how to obtain it from the gateway creation code.
Might someone have a working example of how to get the cache disk to add with Terraform once the gateway is created or know how to get the disk_id so I can add it?
Adding code
provider "aws" {
access_key = "${var.access-key}"
secret_key = "${var.secret-key}"
token = "${var.token}"
region = "${var.region}"
}
resource "aws_storagegateway_gateway" "hmsgw" {
gateway_ip_address = "${var.gateway-ip-address}"
gateway_name = "${var.gateway-name}"
gateway_timezone = "${var.gateway-timezone}"
gateway_type = "${var.gateway-type}"
smb_active_directory_settings {
domain_name = "${var.domain-name}"
username = "${var.username}"
password = "${var.password}"
}
}
resource "aws_storagegateway_cache" "sgwdisk" {
disk_id = "SCSI"
gateway_arn = "${aws_storagegateway_gateway.hmsgw.arn}"
}
output "gatewayid" {
value = "${aws_storagegateway_gateway.hmsgw.arn}"
}
The error I get is:
aws_storagegateway_cache.sgwdisk: error adding Storage Gateway cache: InvalidGatewayRequestException: The specified disk does not exist.
    status code: 400, request id: fda602fd-a47e-11e8-a1f4-b383e2e2e2f6
I have attempted to hard-code the disk_id as above or use a variable. With the variable, I don't know whether the value is ever returned or exists, so that could be the issue; I am new to this.
Before creating the aws_storagegateway_cache resource, use a data source to get the disk ID. I am using the scripts below and they work fine.
variable "upload_disk_path" {
default = "/dev/sdb"
}
data "aws_storagegateway_local_disk" "upload_disk" {
disk_path = "${var.upload_disk_path}"
gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}
resource "aws_storagegateway_upload_buffer" "stg_upload_buffer" {
disk_id = "${data.aws_storagegateway_local_disk.upload_disk.disk_id}"
gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}
In case you are using two disks (one for upload and one for cache), use the same code but set the default value of cache_disk_path to "/dev/sdc".
If you use the AWS CLI to run aws storagegateway list-local-disks --gateway-arn [your gateway's arn] --region [gateway's region], you'll get data returned that includes the disk ID.
Then, in your example code, you replace SCSI with "${gateway_arn}:[diskID from command above]" and your cache volume will be created.
One thing I've noticed, though: when I've done this and then tried to apply the same Terraform code again, and in some cases even with a targeted deploy of a specific resource, it wants to redeploy the cache volume, because Terraform detects the disk ID changing to a value of "1". Passing in "1" as the value in the Terraform, however, does not seem to work.
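If you hit that perpetual diff, one possible workaround (a sketch in Terraform 0.12+ syntax, reusing the question's resource names) is a lifecycle block so Terraform ignores the attribute after creation:
resource "aws_storagegateway_cache" "sgwdisk" {
  disk_id     = "..." # the disk ID obtained above
  gateway_arn = aws_storagegateway_gateway.hmsgw.arn

  lifecycle {
    # Ignore the API reporting a different disk ID after the cache is attached.
    ignore_changes = [disk_id]
  }
}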
This would also work:
variable "disk_path" {
default = "/dev/sdb"
}
provider "aws" {
alias = "primary"
access_key = var.access-key
secret_key = var.secret-key
token = var.token
region = var.region
}
resource "aws_storagegateway_gateway" "hmsgw" {
gateway_ip_address = var.gateway-ip-address
gateway_name = var.gateway-name
gateway_timezone = var.gateway-timezone
gateway_type = var.gateway-type
smb_active_directory_settings {
domain_name = var.domain-name
username = var.username
password = var.password
}
}
data "aws_storagegateway_local_disk" "sgw_disk" {
disk_path = var.disk_path
gateway_arn = aws_storagegateway_gateway.hmsgw.arn
provider = aws.primary
}
resource "aws_storagegateway_cache" "sgw_cache" {
disk_id = data.aws_storagegateway_local_disk.sgw_disk.id
gateway_arn = aws_storagegateway_gateway.hmsgw.arn
provider = aws.primary
}

Configure Postgres application users with Terraform for RDS

Terraform allows you to define the Postgres master user and password with the username and password options. But there is no option to set up an application Postgres user; how would you do that?
The AWS RDS resource is only used for creating/updating/deleting the RDS resource itself using the AWS APIs.
To create users or databases on the RDS instance itself, you'd either want to use another tool (such as psql, the official command line client, or a configuration management tool such as Ansible) or use Terraform's PostgreSQL provider.
Assuming you've already created your RDS instance, you would then connect to the instance as the master user and create the application user with something like this:
provider "postgresql" {
host = "postgres_server_ip1"
username = "postgres_user"
password = "postgres_password"
}
resource "postgresql_role" "application_role" {
name = "application"
login = true
password = "application-password"
encrypted = true
}
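On current Terraform versions the PostgreSQL provider is community-maintained, so you typically also have to declare its source explicitly (the cyrilgdn/postgresql registry address, to the best of my knowledge):
terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
}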
In addition to ydaetskcoR's answer, here is a full example for RDS PostgreSQL:
provider "postgresql" {
scheme = "awspostgres"
host = "db.domain.name"
port = "5432"
username = "db_username"
password = "db_password"
superuser = false
}
resource "postgresql_role" "new_db_role" {
name = "new_db_role"
login = true
password = "db_password"
encrypted_password = true
}
resource "postgresql_database" "new_db" {
name = "new_db"
owner = postgresql_role.new_db_role.name
template = "template0"
lc_collate = "C"
connection_limit = -1
allow_connections = true
}
The above two answers require that the host running Terraform has direct access to the RDS database, which you usually do not have. I propose coding what you need to do in a Lambda function (optionally with Secrets Manager for retrieving the master password):
resource "aws_lambda_function" "terraform_lambda_func" {
filename = "${path.module}/lambda_function/lambda_function.zip"
...
}
and then use the following data source (for example) to call the Lambda function:
data "aws_lambda_invocation" "create_app_user" {
function_name = aws_lambda_function.terraform_lambda_func.function_name
input = <<-JSON
{
"step": "create_app_user"
}
JSON
depends_on = [aws_lambda_function.terraform_lambda_func]
provider = aws.primary
}
This solution is generic: it can do whatever a Lambda function calling the AWS API can do, which is basically limitless.
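One practical caveat: for the function to reach a private RDS instance, it has to be attached to the VPC. A minimal sketch of the relevant wiring, with hypothetical variable and role names:
resource "aws_lambda_function" "terraform_lambda_func" {
  filename      = "${path.module}/lambda_function/lambda_function.zip"
  function_name = "create-app-user"            # hypothetical
  role          = aws_iam_role.lambda_role.arn # assumed IAM role resource
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.9"

  # Attach the function to subnets/security groups that can reach the database.
  vpc_config {
    subnet_ids         = var.private_subnet_ids
    security_group_ids = [var.lambda_security_group_id]
  }
}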