AWS - Looping over each region

I'm trying to loop over all AWS regions, but I get the error message below.
Any idea how to fix this?
Get-EC2SecurityGroup : AWS was not able to validate the provided access credentials
At line:9 char:17
+ $EC2GroupList = Get-EC2SecurityGroup -Region $region | Select-Object ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (Amazon.PowerShe...rityGroupCmdlet:GetEC2SecurityGroupCmdlet) [Get-EC2SecurityGroup], InvalidOperationException
+ FullyQualifiedErrorId : Amazon.EC2.AmazonEC2Exception,Amazon.PowerShell.Cmdlets.EC2.GetEC2SecurityGroupCmdlet
Script:
$Profilelist = "AwsNewProfile"
foreach ($credential in $Profilelist) {
    Set-AWSCredential -ProfileName $credential
    $regionlist = Get-AWSRegion | Select-Object -ExpandProperty Region
    foreach ($region in $regionlist) {
        $EC2GroupList = Get-EC2SecurityGroup -Region $region |
            Select-Object Description, GroupId, GroupName, IpPermissions, IpPermissionsEgress, OwnerId, Tags, VpcId, @{Name='Region'; Expression={$region}}
    }
}
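One likely culprit is that Get-AWSRegion returns regions (for example, opt-in regions not enabled for the account) where the profile's credentials are not valid, so a single bad region aborts the whole loop. A defensive pattern is to catch the failure per region and keep going. Here is that control flow sketched in Python with a stubbed API call; the region names and exception type are illustrative assumptions, not from the original post:

```python
# Sketch: iterate regions, skip the ones where the call fails, tag results with region.
class AuthFailure(Exception):
    """Stands in for the 'AWS was not able to validate the provided access credentials' error."""

def get_security_groups(region):
    # Stub for the real API call; pretend opt-in regions reject our credentials.
    if region in {"me-south-1", "ap-east-1"}:
        raise AuthFailure(f"credentials not valid in {region}")
    return [{"GroupId": "sg-123", "Region": region}]

def collect_groups(regions):
    groups, skipped = [], []
    for region in regions:
        try:
            groups.extend(get_security_groups(region))
        except AuthFailure:
            skipped.append(region)  # record and continue instead of aborting the loop
    return groups, skipped

groups, skipped = collect_groups(["us-east-1", "me-south-1", "eu-west-1"])
```

In the PowerShell original, the same shape is a try/catch around Get-EC2SecurityGroup inside the inner foreach.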

Related

Terraform Cloud apply lambda function fails with ValidationException, AWS CLI lambda create-function with same parameters succeeds

When trying to run a Terraform apply in Terraform Cloud which attempts to create an AWS Lambda function resource, the apply fails with a nondescript ValidationException. No other error is returned. There is an issue in terraform-provider-aws addressing this problem.
This is the Terraform code describing the function:
module "lambda" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "~> 4.7"

  function_name = "${module.this.s3_bucket_id}-to-cloudwatch"
  handler       = "index.handler"
  runtime       = "nodejs12.x"
  timeout       = 60

  create_package         = false
  local_existing_package = "${path.module}/assets/code.zip"

  environment_variables = {
    LOG_GROUP_NAME     = aws_cloudwatch_log_group.log_group.name
    LOAD_BALANCER_TYPE = var.load_balancer_type
  }

  allowed_triggers = {
    S3EventPermission = {
      principal  = "s3.amazonaws.com"
      source_arn = module.this.s3_bucket_arn
    }
  }

  role_path   = "/tf-managed/"
  policy_path = "/tf-managed/"

  attach_cloudwatch_logs_policy = true
  attach_tracing_policy         = true
  tracing_mode                  = "active"

  attach_policy_statements = true
  policy_statements = {
    describe_log_groups = {
      effect    = "Allow"
      actions   = ["logs:DescribeLogGroups"]
      resources = ["*"]
    }
    create_logs = {
      effect  = "Allow"
      actions = [
        "logs:DescribeLogStreams",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
      ]
      resources = [aws_cloudwatch_log_group.log_group.arn]
    }
    get_logs = {
      effect    = "Allow"
      actions   = ["s3:GetObject"]
      resources = ["${module.this.s3_bucket_arn}/*"]
    }
  }
}
This is the output of terraform plan for the function:
# module.cluster_nlb.module.log_bucket.module.lambda.aws_lambda_function.this[0] will be created
+ resource "aws_lambda_function" "this" {
+ architectures = (known after apply)
+ arn = (known after apply)
+ filename = "../../../modules/lb-log-bucket-with-cloudwatch-export/assets/code.zip"
+ function_name = "nlb-access-logs-04916534-to-cloudwatch"
+ handler = "index.handler"
+ id = (known after apply)
+ invoke_arn = (known after apply)
+ last_modified = (known after apply)
+ memory_size = 128
+ package_type = "Zip"
+ publish = false
+ qualified_arn = (known after apply)
+ reserved_concurrent_executions = -1
+ role = "arn:aws:iam::585685634436:role/tf-managed/nlb-access-logs-04916534-to-cloudwatch"
+ runtime = "nodejs12.x"
+ signing_job_arn = (known after apply)
+ signing_profile_version_arn = (known after apply)
+ source_code_hash = "/pwL7Szm/wc/8dP8/Relzc8vy7nkAUQm9jtvgfWJa5c="
+ source_code_size = (known after apply)
+ tags_all = (known after apply)
+ timeout = 60
+ version = (known after apply)
+ environment {
+ variables = {
+ "LOAD_BALANCER_TYPE" = "network"
+ "LOG_GROUP_NAME" = "/aws/elb/network"
}
}
+ ephemeral_storage {
+ size = 512
}
+ tracing_config {
+ mode = "active"
}
}
The error as displayed in Terraform Cloud:
Error: error creating Lambda Function (1): ValidationException: status code: 400, request id: [...]
with module.cluster_alb.module.log_bucket.module.lambda.aws_lambda_function.this[0]
on .terraform/modules/cluster_alb.log_bucket.lambda/main.tf line 24, in resource "aws_lambda_function" "this":
resource "aws_lambda_function" "this" {
I've been trying to get a more detailed error by replicating the planned apply with an AWS CLI lambda create-function command; however, the command completes and successfully creates the Lambda function.
This is the AWS CLI command:
aws lambda create-function \
--zip-file fileb://../../../modules/lb-log-bucket-with-cloudwatch-export/assets/code.zip \
--function-name 'nlb-access-logs-04916534-to-cloudwatch' \
--handler 'index.handler' \
--memory-size '128' \
--package-type 'Zip' \
--no-publish \
--role 'arn:aws:iam::585685634436:role/tf-managed/nlb-access-logs-04916534-to-cloudwatch' \
--runtime 'nodejs12.x' \
--timeout '60' \
--environment 'Variables={LOG_GROUP_NAME=/aws/elb/network,LOAD_BALANCER_TYPE=network}' \
--tracing-config 'Mode=Active' \
--description '' \
--debug
I have not been able to identify any discrepancies between the AWS CLI command and the Terraform configuration, or any reason why the validation would fail in Terraform.
I had set tracing_mode = "active" in the Terraform configuration, but passed --tracing-config 'Mode=Active' to the AWS CLI.
Valid values for tracing_mode are "PassThrough" and "Active". Note that the word "Active" must be capitalized.
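The fix in the module configuration above is therefore a one-word change:

```hcl
  # Case-sensitive: "Active", not "active"
  tracing_mode = "Active"
```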

Invoke RestMethod shows no error but does not upload file

I have a PowerShell script that I run through ExtendScript (Photoshop) to upload files to an S3 bucket.
The code below uploads files to the bucket, but only for smaller files: it works on a 50 MB file, yet on a 140 MB file it shows no error and the file is never uploaded.
Any ideas?
$_rawfilename = 'C:/Users/DELL/AppData/Local/Temp/Filled_Albedo.exr'
$folder = 'seam-removal'
$filename = 'Filled_Albedo.exr'
$keyFile = ($folder+ '/' + $filename)
$service = 's3'
$bucket = '**'
$region = 'us-west-2'
$host1 = $bucket + '.s3' + '.amazonaws.com'
$access_key = '**'
$secret_key = '**'
$br = [regex]::Unescape('\u000a')
function HmacSHA256($message, $secret) {
    $hmacsha = New-Object System.Security.Cryptography.HMACSHA256
    $hmacsha.Key = $secret
    $signature = $hmacsha.ComputeHash([Text.Encoding]::ASCII.GetBytes($message))
    return $signature
}

function getSignatureKey($key, $dateStamp, $regionName, $serviceName) {
    $kSecret = [Text.Encoding]::UTF8.GetBytes(('AWS4' + $key).toCharArray())
    $kDate = HmacSHA256 $dateStamp $kSecret
    $kRegion = HmacSHA256 $regionName $kDate
    $kService = HmacSHA256 $serviceName $kRegion
    $kSigning = HmacSHA256 'aws4_request' $kService
    return $kSigning
}

function hash($request) {
    $hasher = [System.Security.Cryptography.SHA256]::Create()
    $content = [Text.Encoding]::UTF8.GetBytes($request)
    $bytes = $hasher.ComputeHash($content)
    return ($bytes | ForEach-Object ToString x2) -join ''
}
function requestBuilder($method, $key) {
    $now = [DateTime]::UtcNow
    $amz_date = $now.ToString('yyyyMMddTHHmmssZ')
    $datestamp = $now.ToString('yyyyMMdd')
    $signed_headers = 'host'
    $credential_scope = $datestamp + '/' + $region + '/' + $service + '/' + 'aws4_request'
    $canonical_querystring = 'X-Amz-Algorithm=AWS4-HMAC-SHA256'
    $canonical_querystring += '&X-Amz-Credential=' + [uri]::EscapeDataString(($access_key + '/' + $credential_scope))
    $canonical_querystring += '&X-Amz-Date=' + $amz_date
    $canonical_querystring += '&X-Amz-Expires=86400'
    $canonical_querystring += '&X-Amz-SignedHeaders=' + $signed_headers
    $canonical_headers = 'host:' + $host1 + $br
    $canonical_request = $method + $br
    $canonical_request += '/' + $key + $br
    $canonical_request += $canonical_querystring + $br
    $canonical_request += $canonical_headers + $br
    $canonical_request += $signed_headers + $br
    $canonical_request += 'UNSIGNED-PAYLOAD'
    $algorithm = 'AWS4-HMAC-SHA256'
    $canonical_request_hash = hash -request $canonical_request
    $string_to_sign = $algorithm + $br
    $string_to_sign += $amz_date + $br
    $string_to_sign += $credential_scope + $br
    $string_to_sign += $canonical_request_hash
    $signing_key = getSignatureKey $secret_key $datestamp $region $service
    $signature = HmacSHA256 -secret $signing_key -message $string_to_sign
    $signature = ($signature | ForEach-Object ToString x2) -join ''
    $canonical_querystring += '&X-Amz-Signature=' + $signature
    $request_url = 'http://' + $host1 + '/' + $key + '?' + $canonical_querystring
    Write-Host $request_url
    return $request_url
}
# C# class to create the certificate validation callback
$code = @"
public class SSLHandler
{
    public static System.Net.Security.RemoteCertificateValidationCallback GetSSLHandler()
    {
        return new System.Net.Security.RemoteCertificateValidationCallback((sender, certificate, chain, policyErrors) => { return true; });
    }
}
"@
# compile the class
Add-Type -TypeDefinition $code
# disable certificate checks using the new class
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = [SSLHandler]::GetSSLHandler()
# do the request
try {
    Invoke-RestMethod -Method PUT -Uri (requestBuilder 'PUT' $keyFile) -InFile $_rawfilename
} catch {
    # do something
} finally {
    # re-enable certificate checks
    [System.Net.ServicePointManager]::ServerCertificateValidationCallback = $null
}
It started working after making the changes suggested in the comments above (adding verbose output and Write-Host diagnostics).

Terraform (0.12.29) import not working as expected; import succeeded but plan shows destroy & recreate

Some Background:
We have Terraform code to create various AWS resources. Some of these resources are created once per AWS account, and so the code is structured to live in an account-scope folder in our project. This dates from when we had only one AWS region; our application has since been made multi-region, so these resources must now be created per region for each AWS account.
To do that, we have moved these TF scripts to a region-scope folder, which will be run per region. Since these resources are no longer part of the account scope, we have removed them from the account-scope Terraform state.
I imported the resources by running this from the xyz-region-scope directory:
terraform import -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color <RESOURCE_NAME> <RESOURCE_ID>
One of the examples of a resource is:
RESOURCE_NAME=module.buckets.aws_s3_bucket.cloudtrail_logging_bucket
RESOURCE_ID="ab-xyz-stage-cloudtrail-logging-72a2c5cd"
I was expecting the imports to update the Terraform state file on my local machine, but the state file created under xyz-region-scope/state/xyz-stage/terraform.tfstate is not updated.
I verified the imports with:
terraform show
Then I ran terraform plan:
terraform plan -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color
But the terraform plan output shows "Plan: 6 to add, 0 to change, 5 to destroy."; that is, those resources will be destroyed and recreated.
I am not clear why; am I missing something or not doing this right?
Please note that we store remote state in an S3 bucket, but I do not currently have a remote TF state file in S3 for the region scope (I do have one for the account scope). I was expecting the import/plan/apply process to create one for the region scope as well.
EDIT: I now see the remote TF state file for the region scope created in S3 after running the imports. One difference I see between this new region-scope TF state file and the old account-scope one: the new file does not have any "depends_on" block under any of the resources' resources[] > instances[] entries.
Environment:
Local machine: macOS v10.14.6
Terraform v0.12.29
+ provider.aws v3.14.1
+ provider.null v2.1.2
+ provider.random v2.3.1
+ provider.template v2.1.2
EDIT 2:
Here are my imports and the terraform plan:
terraform import module.buckets.random_id.cloudtrail_bucket_suffix cqLFzQ
terraform import module.buckets.aws_s3_bucket.cloudtrail_logging_bucket "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
terraform import module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
terraform import module.buckets.module.access_logging_bucket.aws_s3_bucket.default "ab-xyz-stage-access-logging-9d8e94ff"
terraform import module.buckets.module.access_logging_bucket.random_id.bucket_suffix nY6U_w
terraform import module.encryption.module.data_key.aws_iam_policy.decrypt "arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_decrypt"
terraform import module.encryption.module.data_key.aws_iam_policy.encrypt "arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_encrypt"
mymachine:xyz-region-scope kuldeepjain$ ../scripts/terraform.sh xyz-stage plan -no-color
+ set -o posix
+ IFS='
'
++ blhome
+ BASH_LIB_HOME=/usr/local/lib/mycompany/ab/bash_library/0.0.1-SNAPSHOT
+ source /usr/local/lib/mycompany/ab/bash_library/0.0.1-SNAPSHOT/s3/bucket.sh
+ main xyz-stage plan -no-color
+ '[' 3 -lt 2 ']'
+ local env=xyz-stage
+ shift
+ local command=plan
+ shift
++ get_region xyz-stage
++ local env=xyz-stage
++ shift
+++ aws --profile xyz-stage configure get region
++ local region=us-west-2
++ '[' -z us-west-2 ']'
++ echo us-west-2
+ local region=us-west-2
++ _get_bucket xyz-stage xyz-stage-tfstate
++ local env=xyz-stage
++ shift
++ local name=xyz-stage-tfstate
++ shift
+++ _get_bucket_list xyz-stage xyz-stage-tfstate
+++ local env=xyz-stage
+++ shift
+++ local name=xyz-stage-tfstate
+++ shift
+++ aws --profile xyz-stage --output json s3api list-buckets --query 'Buckets[?contains(Name, `xyz-stage-tfstate`) == `true`].Name'
++ local 'bucket_list=[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ _count_buckets_in_json '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ local 'json=[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ shift
+++ echo '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ jq '. | length'
++ local number_of_buckets=1
++ '[' 1 == 0 ']'
++ '[' 1 -gt 1 ']'
+++ echo '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ jq -r '.[0]'
++ local bucket_name=ab-xyz-stage-tfstate-5b8873b8
++ echo ab-xyz-stage-tfstate-5b8873b8
+ local tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8
++ get_config_file xyz-stage us-west-2
++ local env=xyz-stage
++ shift
++ local region=us-west-2
++ shift
++ local config_file=config/us-west-2/xyz-stage.tfvars
++ '[' '!' -f config/us-west-2/xyz-stage.tfvars ']'
++ config_file=config/us-west-2/default.tfvars
++ echo config/us-west-2/default.tfvars
+ local config_file=config/us-west-2/default.tfvars
+ export TF_DATA_DIR=state/xyz-stage/
+ TF_DATA_DIR=state/xyz-stage/
+ terraform get
+ terraform plan -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
module.encryption.module.data_key.data.null_data_source.key: Refreshing state...
module.buckets.data.template_file.dependencies: Refreshing state...
module.buckets.module.access_logging_bucket.data.template_file.dependencies: Refreshing state...
module.encryption.module.data_key.data.aws_region.current: Refreshing state...
module.buckets.module.access_logging_bucket.data.aws_caller_identity.current: Refreshing state...
data.aws_caller_identity.current: Refreshing state...
module.buckets.module.access_logging_bucket.data.aws_kms_alias.encryption_key_alias: Refreshing state...
module.buckets.data.aws_caller_identity.current: Refreshing state...
module.encryption.module.data_key.data.aws_caller_identity.current: Refreshing state...
module.encryption.module.data_key.data.aws_kms_alias.default: Refreshing state...
module.buckets.module.access_logging_bucket.data.template_file.encryption_configuration: Refreshing state...
module.encryption.module.data_key.data.aws_iam_policy_document.decrypt: Refreshing state...
module.encryption.module.data_key.data.aws_iam_policy_document.encrypt: Refreshing state...
module.buckets.module.access_logging_bucket.random_id.bucket_suffix: Refreshing state... [id=nY6U_w]
module.encryption.module.data_key.aws_iam_policy.decrypt: Refreshing state... [id=arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_decrypt]
module.encryption.module.data_key.aws_iam_policy.encrypt: Refreshing state... [id=arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_encrypt]
module.buckets.module.access_logging_bucket.aws_s3_bucket.default: Refreshing state... [id=ab-xyz-stage-access-logging-9d8e94ff]
module.buckets.random_id.cloudtrail_bucket_suffix: Refreshing state... [id=cqLFzQ]
module.buckets.aws_s3_bucket.cloudtrail_logging_bucket: Refreshing state... [id=ab-xyz-stage-cloudtrail-logging-72a2c5cd]
module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail: Refreshing state...
module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket: Refreshing state... [id=ab-xyz-stage-cloudtrail-logging-72a2c5cd]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement
<= read (data resources)
Terraform will perform the following actions:
# module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail will be read during apply
# (config refers to values not yet known)
<= data "aws_iam_policy_document" "restrict_access_cloudtrail" {
+ id = (known after apply)
+ json = (known after apply)
+ statement {
+ actions = [
+ "s3:GetBucketAcl",
]
+ effect = "Allow"
+ resources = [
+ (known after apply),
]
+ sid = "AWSCloudTrailAclCheck"
+ principals {
+ identifiers = [
+ "cloudtrail.amazonaws.com",
]
+ type = "Service"
}
}
+ statement {
+ actions = [
+ "s3:PutObject",
]
+ effect = "Allow"
+ resources = [
+ (known after apply),
]
+ sid = "AWSCloudTrailWrite"
+ condition {
+ test = "StringEquals"
+ values = [
+ "bucket-owner-full-control",
]
+ variable = "s3:x-amz-acl"
}
+ principals {
+ identifiers = [
+ "cloudtrail.amazonaws.com",
]
+ type = "Service"
}
}
}
# module.buckets.aws_s3_bucket.cloudtrail_logging_bucket must be replaced
-/+ resource "aws_s3_bucket" "cloudtrail_logging_bucket" {
+ acceleration_status = (known after apply)
+ acl = "private"
~ arn = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
~ bucket = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply) # forces replacement
~ bucket_domain_name = "ab-xyz-stage-cloudtrail-logging-72a2c5cd.s3.amazonaws.com" -> (known after apply)
~ bucket_regional_domain_name = "ab-xyz-stage-cloudtrail-logging-72a2c5cd.s3.us-west-2.amazonaws.com" -> (known after apply)
+ force_destroy = false
~ hosted_zone_id = "Z3BJ6K6RIION7M" -> (known after apply)
~ id = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
~ region = "us-west-2" -> (known after apply)
~ request_payer = "BucketOwner" -> (known after apply)
tags = {
"mycompany:finance:accountenvironment" = "xyz-stage"
"mycompany:finance:application" = "ab-platform"
"mycompany:finance:billablebusinessunit" = "my-dev"
"name" = "Cloudtrail logging bucket"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
~ lifecycle_rule {
- abort_incomplete_multipart_upload_days = 0 -> null
enabled = true
~ id = "intu-lifecycle-s3-int-tier" -> (known after apply)
- tags = {} -> null
transition {
days = 32
storage_class = "INTELLIGENT_TIERING"
}
}
- logging {
- target_bucket = "ab-xyz-stage-access-logging-9d8e94ff" -> null
- target_prefix = "logs/cloudtrail-logging/" -> null
}
+ logging {
+ target_bucket = (known after apply)
+ target_prefix = "logs/cloudtrail-logging/"
}
~ versioning {
~ enabled = false -> (known after apply)
~ mfa_delete = false -> (known after apply)
}
}
# module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket must be replaced
-/+ resource "aws_s3_bucket_policy" "cloudtrail_logging_bucket" {
~ bucket = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply) # forces replacement
~ id = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
~ policy = jsonencode(
{
- Statement = [
- {
- Action = "s3:GetBucketAcl"
- Effect = "Allow"
- Principal = {
- Service = "cloudtrail.amazonaws.com"
}
- Resource = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd"
- Sid = "AWSCloudTrailAclCheck"
},
- {
- Action = "s3:PutObject"
- Condition = {
- StringEquals = {
- s3:x-amz-acl = "bucket-owner-full-control"
}
}
- Effect = "Allow"
- Principal = {
- Service = "cloudtrail.amazonaws.com"
}
- Resource = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd/*"
- Sid = "AWSCloudTrailWrite"
},
]
- Version = "2012-10-17"
}
) -> (known after apply)
}
# module.buckets.random_id.cloudtrail_bucket_suffix must be replaced
-/+ resource "random_id" "cloudtrail_bucket_suffix" {
~ b64 = "cqLFzQ" -> (known after apply)
~ b64_std = "cqLFzQ==" -> (known after apply)
~ b64_url = "cqLFzQ" -> (known after apply)
byte_length = 4
~ dec = "1923270093" -> (known after apply)
~ hex = "72a2c5cd" -> (known after apply)
~ id = "cqLFzQ" -> (known after apply)
+ keepers = {
+ "aws_account_id" = "123412341234"
+ "env" = "xyz-stage"
} # forces replacement
}
# module.buckets.module.access_logging_bucket.aws_s3_bucket.default must be replaced
-/+ resource "aws_s3_bucket" "default" {
+ acceleration_status = (known after apply)
+ acl = "log-delivery-write"
~ arn = "arn:aws:s3:::ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply)
~ bucket = "ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply) # forces replacement
~ bucket_domain_name = "ab-xyz-stage-access-logging-9d8e94ff.s3.amazonaws.com" -> (known after apply)
~ bucket_regional_domain_name = "ab-xyz-stage-access-logging-9d8e94ff.s3.us-west-2.amazonaws.com" -> (known after apply)
+ force_destroy = false
~ hosted_zone_id = "Z3BJ6K6RIION7M" -> (known after apply)
~ id = "ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply)
~ region = "us-west-2" -> (known after apply)
~ request_payer = "BucketOwner" -> (known after apply)
tags = {
"mycompany:finance:accountenvironment" = "xyz-stage"
"mycompany:finance:application" = "ab-platform"
"mycompany:finance:billablebusinessunit" = "my-dev"
"name" = "Access logging bucket"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
- grant {
- permissions = [
- "READ_ACP",
- "WRITE",
] -> null
- type = "Group" -> null
- uri = "http://acs.amazonaws.com/groups/s3/LogDelivery" -> null
}
- grant {
- id = "0343271a8c2f184152c171b223945b22ceaf5be5c9b78cf167660600747b5ad8" -> null
- permissions = [
- "FULL_CONTROL",
] -> null
- type = "CanonicalUser" -> null
}
- lifecycle_rule {
- abort_incomplete_multipart_upload_days = 0 -> null
- enabled = true -> null
- id = "intu-lifecycle-s3-int-tier" -> null
- tags = {} -> null
- transition {
- days = 32 -> null
- storage_class = "INTELLIGENT_TIERING" -> null
}
}
+ logging {
+ target_bucket = (known after apply)
+ target_prefix = "logs/access-logging/"
}
~ versioning {
~ enabled = false -> (known after apply)
~ mfa_delete = false -> (known after apply)
}
}
# module.buckets.module.access_logging_bucket.random_id.bucket_suffix must be replaced
-/+ resource "random_id" "bucket_suffix" {
~ b64 = "nY6U_w" -> (known after apply)
~ b64_std = "nY6U/w==" -> (known after apply)
~ b64_url = "nY6U_w" -> (known after apply)
byte_length = 4
~ dec = "2643367167" -> (known after apply)
~ hex = "9d8e94ff" -> (known after apply)
~ id = "nY6U_w" -> (known after apply)
+ keepers = {
+ "aws_account_id" = "123412341234"
+ "env" = "xyz-stage"
} # forces replacement
}
Plan: 6 to add, 0 to change, 5 to destroy.
Snippet of a diff of my current remote TF state (left) vs. the old account-scope state (right) for cloudtrail_bucket_suffix: (screenshot not reproduced here)
The plan shows a difference in the name of the bucket (bucket forces replacement).
This triggers recreation of the bucket itself and of its dependent resources.
You need to get the bucket name into a stable state; then the rest will be stable as well. Since you are using a random suffix for the bucket name, I suspect you forgot to import it. The random_id resource allows imports like this:
terraform import module.buckets.random_id.cloudtrail_bucket_suffix cqLFzQ
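The ID accepted by terraform import for a random_id is the unpadded base64url encoding of the same bytes the plan displays as hex and dec. A quick Python check of the values from the plan above:

```python
import base64

raw = bytes.fromhex("72a2c5cd")                               # 'hex' attribute in the plan
b64url = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()  # the import ID
dec = int.from_bytes(raw, "big")                              # 'dec' attribute

# b64url == "cqLFzQ", dec == 1923270093, matching the plan output
```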
Edit:
However, you will also need to remove the keepers, as they trigger replacement of the random_id resource. Keepers are used to trigger recreation of dependent resources whenever other resources change.
I assume this is not what you want for your buckets, as the keepers you defined look stable/static: account_id and env are both unlikely to change for this deployment. If you really need them, you can try to manipulate the state manually.

How can I upload a file to S3 via Powershell with AES256 encryption without installing AWS SDK?

I want to upload a file to AWS S3 with AES256 encryption, but I am not allowed to install the AWS CLI. The code below lets me upload files to S3 using my secret keys, but it does not work when encryption is required on the S3 bucket. How do I achieve the equivalent of aws s3api put-object --server-side-encryption=AES256 --bucket=<bucket_name> --key=<name_of_object_when_uploaded> --body=/<path>/<object_to_upload> with the code below?
# Config Parts
$_rawfilename = 'C:/<NAME>/SSP00001_RITM1304145.csv'
$folder = 'TestResults'
$filename = $_rawfilename.Split('/')[2]
$keyFile = ($folder+ '/' + $filename)
$service = 's3'
$bucket = '<BUCKET NAME>'
$region = 'us-east-1'
$host1 = $bucket + '.s3' + '.amazonaws.com'
$access_key = ''
$secret_key = ''
$br = [regex]::Unescape('\u000a')
function HmacSHA256($message, $secret) {
    $hmacsha = New-Object System.Security.Cryptography.HMACSHA256
    $hmacsha.Key = $secret
    $signature = $hmacsha.ComputeHash([Text.Encoding]::ASCII.GetBytes($message))
    return $signature
}

function getSignatureKey($key, $dateStamp, $regionName, $serviceName) {
    $kSecret = [Text.Encoding]::UTF8.GetBytes(('AWS4' + $key).toCharArray())
    $kDate = HmacSHA256 $dateStamp $kSecret
    $kRegion = HmacSHA256 $regionName $kDate
    $kService = HmacSHA256 $serviceName $kRegion
    $kSigning = HmacSHA256 'aws4_request' $kService
    return $kSigning
}

function hash($request) {
    $hasher = [System.Security.Cryptography.SHA256]::Create()
    $content = [Text.Encoding]::UTF8.GetBytes($request)
    $bytes = $hasher.ComputeHash($content)
    return ($bytes | ForEach-Object ToString x2) -join ''
}
function requestBuilder($method, $key) {
    $now = [DateTime]::UtcNow
    $amz_date = $now.ToString('yyyyMMddTHHmmssZ')
    $datestamp = $now.ToString('yyyyMMdd')
    $signed_headers = 'host'
    $credential_scope = $datestamp + '/' + $region + '/' + $service + '/' + 'aws4_request'
    $canonical_querystring = 'X-Amz-Algorithm=AWS4-HMAC-SHA256'
    $canonical_querystring += '&X-Amz-Credential=' + [uri]::EscapeDataString(($access_key + '/' + $credential_scope))
    $canonical_querystring += '&X-Amz-Date=' + $amz_date
    $canonical_querystring += '&X-Amz-Expires=86400'
    $canonical_querystring += '&X-Amz-SignedHeaders=' + $signed_headers
    $canonical_headers = 'host:' + $host1 + $br
    $canonical_request = $method + $br
    $canonical_request += '/' + $key + $br
    $canonical_request += $canonical_querystring + $br
    $canonical_request += $canonical_headers + $br
    $canonical_request += $signed_headers + $br
    $canonical_request += 'UNSIGNED-PAYLOAD'
    $algorithm = 'AWS4-HMAC-SHA256'
    $canonical_request_hash = hash -request $canonical_request
    $string_to_sign = $algorithm + $br
    $string_to_sign += $amz_date + $br
    $string_to_sign += $credential_scope + $br
    $string_to_sign += $canonical_request_hash
    $signing_key = getSignatureKey $secret_key $datestamp $region $service
    $signature = HmacSHA256 -secret $signing_key -message $string_to_sign
    $signature = ($signature | ForEach-Object ToString x2) -join ''
    $canonical_querystring += '&X-Amz-Signature=' + $signature
    $request_url = 'http://' + $host1 + '/' + $key + '?' + $canonical_querystring
    Write-Host $request_url
    return $request_url
}
# Where -InFile is Path/to/xlsx
Invoke-RestMethod -Method PUT -Uri (requestBuilder 'PUT' $keyFile) -InFile $_rawfilename
Start-Sleep -s 2
I tried adding $canonical_querystring += '&X-amz-server-side-encryption-customer-algorithm=AES256' to the code; however, it still does not work:
$canonical_querystring = ''
$canonical_querystring = 'X-Amz-Algorithm=AWS4-HMAC-SHA256'
$canonical_querystring += '&X-Amz-Credential=' + [uri]::EscapeDataString(($access_key + '/' + $credential_scope))
$canonical_querystring += '&X-Amz-Date=' + $amz_date
$canonical_querystring += '&X-Amz-Expires=86400'
$canonical_querystring += '&X-amz-server-side-encryption-customer-algorithm=AES256'
$canonical_querystring += '&X-Amz-SignedHeaders=' + $signed_headers
You have to add the x-amz-server-side-encryption header to your request's headers ($canonical_headers) and include it in $signed_headers; the same header then has to be sent with the actual PUT. Note also that x-amz-server-side-encryption-customer-algorithm is the header for SSE-C (customer-provided keys); for S3-managed AES256 encryption the header is x-amz-server-side-encryption: AES256.
See:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-s3-encryption.html
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html
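To make the header change concrete, here is the shape of the canonical request with the encryption header included, sketched in Python (the bucket and key are placeholders; this mirrors the structure built by the PowerShell requestBuilder above):

```python
def canonical_request(method, uri, querystring, host):
    # The SSE header joins 'host' in the canonical/signed headers, so it is
    # covered by the signature; it must then also be sent verbatim with the PUT.
    headers = {
        "host": host,
        "x-amz-server-side-encryption": "AES256",
    }
    canonical_headers = "".join(f"{k}:{v}\n" for k, v in sorted(headers.items()))
    signed_headers = ";".join(sorted(headers))  # also goes into X-Amz-SignedHeaders
    return (
        f"{method}\n{uri}\n{querystring}\n"
        f"{canonical_headers}\n{signed_headers}\nUNSIGNED-PAYLOAD"
    ), signed_headers

creq, signed = canonical_request("PUT", "/TestResults/file.csv", "",
                                 "example-bucket.s3.amazonaws.com")
```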

How can I download a file from S3 via Powershell without installing AWS SDK?

I want to download a file from my AWS S3 bucket using Windows PowerShell. I cannot install any AWS software, so I need to build the API request myself to access a file in S3. I used Postman to test that the file is accessible, and that was successful.
Given this success, I tried following AWS's guide, which says I need to do the following:
1. Create a canonical request.
2. Use the canonical request and additional metadata to create a string for signing.
3. Derive a signing key from your AWS secret access key, then use the signing key and the string from the previous step to create a signature.
4. Add the resulting signature to the HTTP request in a header or as a query string parameter.
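Step 3 is a chain of four HMAC-SHA256 operations in which each digest keys the next. Here it is in Python with only the standard library, using the placeholder secret from the worked example in AWS's SigV4 documentation; the crucial detail (and the most common bug in hand-rolled PowerShell ports) is that the intermediate keys stay raw bytes, and only the final signature gets hex-encoded:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def get_signature_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    # Each digest keys the next HMAC as raw bytes (never hex or base64 in between).
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

# Placeholder credentials taken from AWS's SigV4 documentation example:
signing_key = get_signature_key(
    "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY", "20150830", "us-east-1", "iam")
```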
The closest I've seen is this example from https://forums.aws.amazon.com/thread.jspa?threadID=251722 by Abhaya; however, it is also unresolved. (The payload hash in that example is the hash of an empty payload.) I have gone through several AWS guides, but they are very confusing to apply to PowerShell: https://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
The code below generates a URL that looks correct: http://SAMPLEBUCKETNAME HERE.s3-ap-southeast-1.amazonaws.com/test.xlsx?&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=%2F20190907%2Fap-southeast-1%2Fs3%2Faws4_request&X-Amz-Date=20190907T1644136560000Z&X-Amz-E0&X-Amz-SignedHeaders=host&X-Amz-Signature=HASH HERE
$method = 'GET'
$service = 's3'
$host1 = 'SAMPLES3BUCKETNAME.s3-ap-southeast-1.amazonaws.com'
$region = 'ap-southeast-1'
$endpoint = 'http://SAMPLES3BUCKETNAME.s3-ap-southeast-1.amazonaws.com/test.xlsx'
function HmacSHA256($message, $secret) {
    <# Earlier attempt:
    $hmacsha = New-Object System.Security.Cryptography.HMACSHA256
    $hmacsha.key = [Text.Encoding]::UTF8.GetBytes($secret)
    #$hmacsha.key = $secret
    $signature = $hmacsha.ComputeHash([Text.Encoding]::UTF8.GetBytes($message))
    $signature = [Convert]::ToBase64String($signature)
    #>
    $hmacsha = New-Object System.Security.Cryptography.HMACSHA256
    $hmacsha.Key = @($secret -split '(?<=\G..)(?=.)' | ForEach-Object {[byte]::Parse($_, 'HexNumber')})
    $sign = [BitConverter]::ToString($hmacsha.ComputeHash([Text.Encoding]::UTF8.GetBytes($message))).Replace('-', '').ToLower()
    return $sign
}

function getSignatureKey($key, $dateStamp, $regionName, $serviceName) {
    $kSecret = [Text.Encoding]::UTF8.GetBytes(("AWS4" + $key).toCharArray())
    $kDate = HmacSHA256 $dateStamp $kSecret
    $kRegion = HmacSHA256 $regionName $kDate
    $kService = HmacSHA256 $serviceName $kRegion
    $kSigning = HmacSHA256 "aws4_request" $kService
    return $kSigning
}
$access_key = 'SAMPLEACCESSKEY'
$secret_key = 'SAMPLESECRETKEY'
$amz_date = [DateTime]::UtcNow.ToString('yyyyMMddTHHmmssfffffffZ')
$datestamp = [DateTime]::UtcNow.ToString('yyyyMMdd')
$canonical_uri = '/'
$canonical_headers = 'host:' + $host1 + "`n"
$signed_headers = 'host'
$algorithm = 'AWS4-HMAC-SHA256'
$credential_scope = $datestamp + '/' + $region + '/' + $service + '/' + 'aws4_request'
$canonical_querystring = ''
$canonical_querystring += '&X-Amz-Algorithm=AWS4-HMAC-SHA256'
$canonical_querystring += '&X-Amz-Credential=' + [uri]::EscapeDataString(($access_key + '/' + $credential_scope))
$canonical_querystring += '&X-Amz-Date=' + $amz_date
$canonical_querystring += '&X-Amz-Expires=86400'
$canonical_querystring += '&X-Amz-SignedHeaders=' + $signed_headers
$payload_hash = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
$canonical_request1 = $method + "`n" +$canonical_uri + "`n" + $canonical_querystring + "`n" + $canonical_headers + "`n" + $signed_headers + "`n" + $payload_hash
Write-Host $canonical_request1
function hash($request) {
$hasher = [System.Security.Cryptography.SHA256]::Create()
$content = [Text.Encoding]::UTF8.GetBytes($request)
$hash = [System.Convert]::ToBase64String($hasher.ComputeHash($content))
return $hash
}
$canonical_request = hash -request $canonical_request1
$string_to_sign = $algorithm + "`n" + $amz_date + "`n" + $credential_scope + "`n" + $canonical_request
$signing_key = getSignatureKey $secret_key $datestamp $region $service
$signature = HmacSHA256 -secret $signing_key -message $string_to_sign
$canonical_querystring += '&X-Amz-Signature=' + $signature
$request_url = $endpoint + "?" + $canonical_querystring
$request_url
I get an error when I try to access the URL.
There were a few errors, notably in how you were computing the signature and building the timestamp; the error you were seeing occurred because the parameters weren't being passed along properly.
Here's a version that corrects those issues:
$method = 'GET'
$service = 's3'
$bucket = "SAMPLES3BUCKETNAME"
$key = 'test.xlsx'
$region = 'ap-southeast-1'
$host1 = $bucket + '.s3-' + $region + '.amazonaws.com'
$access_key = 'SAMPLEACCESSKEY'
$secret_key = 'SAMPLESECRETKEY'
# Computes HMAC-SHA256 of $message keyed with the raw byte array $secret; returns the raw digest bytes
function HmacSHA256($message, $secret)
{
$hmacsha = New-Object System.Security.Cryptography.HMACSHA256
$hmacsha.key = $secret
$signature = $hmacsha.ComputeHash([Text.Encoding]::ASCII.GetBytes($message))
return $signature
}
# Derives the SigV4 signing key: an HMAC chain over date stamp, region, service, and "aws4_request"
function getSignatureKey($key, $dateStamp, $regionName, $serviceName)
{
$kSecret = [Text.Encoding]::UTF8.GetBytes(("AWS4" + $key).toCharArray())
$kDate = HmacSHA256 $dateStamp $kSecret;
$kRegion = HmacSHA256 $regionName $kDate;
$kService = HmacSHA256 $serviceName $kRegion;
$kSigning = HmacSHA256 "aws4_request" $kService;
return $kSigning
}
# SHA-256 of $request as a lowercase hex string, as SigV4 requires (not Base64)
function hash($request)
{
$hasher = [System.Security.Cryptography.SHA256]::Create()
$content = [Text.Encoding]::UTF8.GetBytes($request)
$bytes = $hasher.ComputeHash($content)
return ($bytes|ForEach-Object ToString x2) -join ''
}
$now = [DateTime]::UtcNow
$amz_date = $now.ToString('yyyyMMddTHHmmssZ')
$datestamp = $now.ToString('yyyyMMdd')
$signed_headers = 'host'
$credential_scope = $datestamp + '/' + $region + '/' + $service + '/' + 'aws4_request'
$canonical_querystring = 'X-Amz-Algorithm=AWS4-HMAC-SHA256'
$canonical_querystring += '&X-Amz-Credential=' + [uri]::EscapeDataString(($access_key + '/' + $credential_scope))
$canonical_querystring += '&X-Amz-Date=' + $amz_date
$canonical_querystring += '&X-Amz-Expires=86400'
$canonical_querystring += '&X-Amz-SignedHeaders=' + $signed_headers
$canonical_headers = 'host:' + $host1 + "`n"
$canonical_request = $method + "`n"
$canonical_request += "/" + $key + "`n"
$canonical_request += $canonical_querystring + "`n"
$canonical_request += $canonical_headers + "`n"
$canonical_request += $signed_headers + "`n"
$canonical_request += "UNSIGNED-PAYLOAD" # presigned S3 URLs may use an unsigned payload instead of a content hash
$algorithm = 'AWS4-HMAC-SHA256'
$canonical_request_hash = hash -request $canonical_request
$string_to_sign = $algorithm + "`n"
$string_to_sign += $amz_date + "`n"
$string_to_sign += $credential_scope + "`n"
$string_to_sign += $canonical_request_hash
$signing_key = getSignatureKey $secret_key $datestamp $region $service
$signature = HmacSHA256 -secret $signing_key -message $string_to_sign
$signature = ($signature|ForEach-Object ToString x2) -join '' # digest bytes -> lowercase hex
$canonical_querystring += '&X-Amz-Signature=' + $signature
$request_url = "http://" + $host1 + "/" + $key + "?" + $canonical_querystring
Write-Host $request_url
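If you want to sanity-check the signing chain itself, the two pieces that most often go wrong can be reproduced outside PowerShell. The sketch below is purely illustrative, in Python: it mirrors the `hash` and `getSignatureKey` functions above, and uses the example secret key, date, region, and service published in the AWS Signature Version 4 documentation (not real credentials).

```python
import hashlib
import hmac

def sign(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step of the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def get_signature_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    """Mirror of the PowerShell getSignatureKey: chain over date, region, service, aws4_request."""
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# SigV4 wants hashes as lowercase hex, not Base64; this is the well-known
# SHA-256 of an empty payload.
empty_payload_hash = hashlib.sha256(b"").hexdigest()
print(empty_payload_hash)
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

# Example values from the AWS SigV4 "derive the signing key" documentation.
key = get_signature_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                        "20150830", "us-east-1", "iam")
print(key.hex())
# c4afb1cc5771d871763a393e44b703571b55cc28424d1a5e86da6ed3c154a4b9
```

If the PowerShell `hash` and `getSignatureKey` functions produce the same values for the same inputs, the signing chain is correct, and any remaining signature mismatch lies in how the canonical request or query string is assembled.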