How to create an AWS SSM Document Package using Terraform

Using Terraform, I am trying to create an AWS SSM Document Package for Chrome so I can install it on various EC2 instances I have. I define these steps via Terraform:
1. Upload a zip containing the Chrome installer plus install and uninstall PowerShell scripts.
2. Add that zip to an SSM package.
However, when I execute terraform apply, I receive the following error:
Error updating SSM document: InvalidParameterValueException:
AttachmentSource not provided in the input request.
status code: 400, request id: 8d89da70-64de-4edb-95cd-b5f52207794c
The contents of my main.tf are as follows:
# 1. Add package zip to s3
resource "aws_s3_bucket_object" "windows_chrome_executable" {
bucket = "mybucket"
key = "ssm_document_packages/GoogleChromeStandaloneEnterprise64.msi.zip"
source = "./software-packages/GoogleChromeStandaloneEnterprise64.msi.zip"
etag = md5("./software-packages/GoogleChromeStandaloneEnterprise64.msi.zip")
}
# 2. Create AWS SSM Document Package using zip.
resource "aws_ssm_document" "ssm_document_package_windows_chrome" {
name = "windows_chrome"
document_type = "Package"
attachments_source {
key = "SourceUrl"
values = ["/path/to/mybucket"]
}
content = <<DOC
{
"schemaVersion": "2.0",
"version": "1.0.0",
"packages": {
"windows": {
"_any": {
"x86_64": {
"file": "GoogleChromeStandaloneEnterprise64.msi.zip"
}
}
}
},
"files": {
"GoogleChromeStandaloneEnterprise64.msi.zip": {
"checksums": {
"sha256": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
}
}
DOC
}
If I change the file from a zip to a vanilla MSI, I do not receive the error message; however, when I navigate to the package in the AWS console, it tells me that the install.ps1 and uninstall.ps1 files are missing (since they obviously weren't included).
Has anyone experienced the above error and do you know how to resolve it? Or does anyone have reference to a detailed example of how to do this?
Thank you.

I ran into this same problem. To fix it, I added a trailing slash to the SourceUrl value parameter:
attachments_source {
  key    = "SourceUrl"
  values = ["/path/to/mybucket/"]
}
My best guess is that SSM appends the file name from the package spec to the value provided in attachments_source, so it needs the trailing slash to build a valid path to the actual file, i.e. "/path/to/mybucket/" + "GoogleChromeStandaloneEnterprise64.msi.zip".

This is the way it should be set up for an attachment in s3:
attachments_source {
  key    = "S3FileUrl"
  values = ["s3://packer-bucket/packer_1.7.0_linux_amd64.zip"]
  name   = "packer_1.7.0_linux_amd64.zip"
}
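Applied to the bucket and key from the question above, that would look something like the following (bucket and key taken from the question; adjust to your actual object):
attachments_source {
  key    = "S3FileUrl"
  values = ["s3://mybucket/ssm_document_packages/GoogleChromeStandaloneEnterprise64.msi.zip"]
  name   = "GoogleChromeStandaloneEnterprise64.msi.zip"
}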

I realized that in the above example there was no way for Terraform to identify a dependency between the two resources, i.e. the S3 object needs to be created before the aws_ssm_document. Thus, I added the following explicit dependency inside the aws_ssm_document:
depends_on = [
  aws_s3_bucket_object.windows_chrome_executable
]
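For clarity, a sketch of where that argument sits in the resource (abridged; attachments_source and content are the same as shown earlier):
resource "aws_ssm_document" "ssm_document_package_windows_chrome" {
  name          = "windows_chrome"
  document_type = "Package"

  # ... attachments_source and content as above ...

  depends_on = [
    aws_s3_bucket_object.windows_chrome_executable
  ]
}
Alternatively, referencing an attribute of aws_s3_bucket_object.windows_chrome_executable somewhere in the document (for example when building the SourceUrl value) would give Terraform the same ordering information implicitly.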

Related

AWS Macie & Terraform - Select all S3 buckets in account

I am enabling AWS Macie 2 using Terraform and am defining a default classification job as follows:
resource "aws_macie2_account" "member" {}
resource "aws_macie2_classification_job" "member" {
job_type = "ONE_TIME"
name = "S3 PHI Discovery default"
s3_job_definition {
bucket_definitions {
account_id = var.account_id
buckets = ["S3 BUCKET NAME 1", "S3 BUCKET NAME 2"]
}
}
depends_on = [aws_macie2_account.member]
}
AWS Macie needs a list of S3 buckets to analyze. I am wondering if there is a way to select all buckets in an account, using a wildcard or some other method. Our production accounts contain hundreds of S3 buckets and hard-coding each value in the s3_job_definition is not feasible.
Any ideas?
The Terraform AWS provider does not support a data source for listing S3 buckets at this time, unfortunately. For things like this (data sources that Terraform doesn't support), the common approach is to use the AWS CLI through an external data source.
These are modules that I like to use for CLI/shell commands:
As a data source (re-runs each time)
As a resource (re-runs only on resource recreate or on a change to a trigger)
Using the data source version, it would look something like:
module "list_buckets" {
source = "Invicton-Labs/shell-data/external"
version = "0.1.6"
// Since the command is the same on both Unix and Windows, it's ok to just
// specify one and not use the `command_windows` input arg
command_unix = "aws s3api list-buckets --output json"
// You want Terraform to fail if it can't get the list of buckets for some reason
fail_on_error = true
// Specify your AWS credentials as environment variables
environment = {
AWS_PROFILE = "myprofilename"
// Alternatively, although not recommended:
// AWS_ACCESS_KEY_ID = "..."
// AWS_SECRET_ACCESS_KEY = "..."
}
}
output "buckets" {
// We specified JSON format for the output, so decode it to get a list
value = jsondecode(module.list_buckets.stdout).Buckets
}
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
buckets = [
  {
    "CreationDate" = "2021-07-15T18:10:20+00:00"
    "Name"         = "bucket-foo"
  },
  {
    "CreationDate" = "2021-07-15T18:11:10+00:00"
    "Name"         = "bucket-bar"
  },
]
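From there, a minimal sketch of feeding that list into the classification job from the question (the local name is illustrative):
locals {
  // Pull just the bucket names out of the decoded CLI output
  all_bucket_names = [for b in jsondecode(module.list_buckets.stdout).Buckets : b.Name]
}

resource "aws_macie2_classification_job" "member" {
  job_type = "ONE_TIME"
  name     = "S3 PHI Discovery default"

  s3_job_definition {
    bucket_definitions {
      account_id = var.account_id
      buckets    = local.all_bucket_names
    }
  }

  depends_on = [aws_macie2_account.member]
}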

How to send lifecycle_rules to an S3 module in Terraform

I have a Terraform module that creates an S3 bucket. I want the module to be able to accept lifecycle rules.
resource "aws_s3_bucket" "somebucket" {
bucket = "my-versioning-bucket"
acl = "private"
lifecycle_rule {
prefix = "config/"
enabled = true
noncurrent_version_transition {
days = 30
storage_class = "STANDARD_IA"
}
}
}
I want to be able to send the above lifecycle_rule block of code when I call the module. I tried to send it through a variable but it did not work. I have done some research but no luck. Any help is highly appreciated.
Try using an output: in one module, expose the desired value as an output, e.g.:
output "lifecycle_rule" {
value = aws_s3_bucket.somebucket.id
}
and pass this value into your other module, like:
module "somename" {
source = "/somewhere"
lifecycle_rule = module.amodule-name-where-output-is-applied.lifecycle_rule
...
You would need to play around this.
Just give it a try, these are my guess as far as I understand terraform and your questing.
The link below can also help you:
Terraform: Output a field from a module
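For reference, another common pattern (not what the answer above describes) is to pass the rules into the module as a variable and expand them with a dynamic block. A minimal sketch, assuming a module variable named lifecycle_rules and a provider version that still supports the inline lifecycle_rule block:
# Inside the module
variable "lifecycle_rules" {
  description = "List of lifecycle rule objects to apply to the bucket"
  type        = any
  default     = []
}

resource "aws_s3_bucket" "somebucket" {
  bucket = "my-versioning-bucket"
  acl    = "private"

  # Generate one lifecycle_rule block per object in var.lifecycle_rules
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      noncurrent_version_transition {
        days          = lifecycle_rule.value.noncurrent_version_transition.days
        storage_class = lifecycle_rule.value.noncurrent_version_transition.storage_class
      }
    }
  }
}
The caller then supplies a list of objects mirroring the lifecycle_rule arguments, e.g. lifecycle_rules = [{ prefix = "config/", enabled = true, noncurrent_version_transition = { days = 30, storage_class = "STANDARD_IA" } }].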

Terraform not uploading a new ZIP

I want to use Terraform for deployment of my lambda functions. I did something like:
provider "aws" {
region = "ap-southeast-1"
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
}
resource "aws_lambda_function" "test_terraform_function" {
filename = "build/lambdas.zip"
function_name = "test_terraform_function"
handler = "test.handler"
runtime = "nodejs8.10"
role = "arn:aws:iam::000000000:role/xxx-lambda-basic"
memory_size = 128
timeout = 5
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
tags = {
"Cost Center" = "Consulting"
Developer = "Jiew Meng"
}
}
I find that when there is no change to test.js, terraform correctly detects no change
No changes. Infrastructure is up-to-date.
When I do change the test.js file, terraform does detect a change:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_lambda_function.test_terraform_function
      last_modified:    "2018-12-20T07:47:16.888+0000" => <computed>
      source_code_hash: "KpnhsytFF0yul6iESDCXiD2jl/LI9dv56SIJnwEi/hY=" => "JWIYsT8SszUjKEe1aVDY/ZWBVfrZYhhb1GrJL26rYdI="
It does build the new zip; however, it does not seem to update the function with it. It seems to think that since the filename has not changed, there is nothing to upload. How can I fix this behaviour?
=====
Following some of the answers here, I tried:
Using null_resource
Using S3 bucket/object with etag
And it still does not update. Why is that?
I ran into the same issue, and what solved it for me was publishing the Lambda function automatically using the publish argument. To do so, simply set publish = true in your aws_lambda_function resource.
Note that your function will be versioned after this, and each change will create a new version. Therefore, you should make sure that you use the qualified_arn attribute reference if you're referring to the function in any of your other Terraform code.
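A minimal sketch of that change against the resource from the question (all other arguments unchanged, abridged here):
resource "aws_lambda_function" "test_terraform_function" {
  # ... filename, function_name, handler, runtime, role, etc. as above ...
  publish          = true
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
}

# Elsewhere, refer to the published version via
# aws_lambda_function.test_terraform_function.qualified_arn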
There is a workaround to trigger the resource to be refreshed if the target Lambda file names are src/main.py and src/handler.py. If you have more files to manage, add them one by one.
resource "null_resource" "lambda" {
triggers {
main = "${base64sha256(file("src/main.py"))}"
handler = "${base64sha256(file("src/handler.py"))}"
}
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
depends_on = ["null_resource.lambda"]
}
Let me know if this works for you.
There are two things you need to take care of:
1. Upload the zip file to S3 if its content has changed.
2. Update the Lambda function if the zip file content has changed.
I can see you are taking care of the latter with source_code_hash. I don't see how you handle the former. It could look like this:
resource "aws_s3_bucket_object" "zip" {
bucket = "${aws_s3_bucket.zip.bucket}"
key = "myzip.zip"
source = "${path.module}/myzip.zip"
etag = "${md5(file("${path.module}/myzip.zip"))}"
}
etag is the most important option here.
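To tie the two together, a sketch (abridged) of pointing the Lambda at that S3 object rather than a local filename, so a changed etag rolls through to a new deployment:
resource "aws_lambda_function" "test_terraform_function" {
  s3_bucket        = "${aws_s3_bucket_object.zip.bucket}"
  s3_key           = "${aws_s3_bucket_object.zip.key}"
  # ... function_name, handler, runtime, role, etc. as in the question ...
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
}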
I created this module to help ease some of the issues around deploying Lambda with Terraform: https://registry.terraform.io/modules/rojopolis/lambda-python-archive/aws/0.1.4
It may be useful in this scenario. Basically, it replaces the "archive_file" data source with a specialized lambda archive data source to better manage stable source code hash, etc.

Terraform load json object from AWS S3

I have a need to load data from a non-public S3 bucket. Using this JSON, I want to be able to loop over the lists within Terraform.
Example:
{
  "info": [
    "10.0.0.0/24",
    "10.1.1.0/24",
    "10.2.2.0/24"
  ]
}
I can retrieve the JSON fine using the following:
data "aws_s3_bucket_object" "config" {
bucket = "our-bucket"
key = "global.json"
}
What I cannot do is utilize this as a map or list within Terraform so that I can use the data. Any ideas?
After a good deal of trial and error, I figured out a solution. Note that for this to work, it appears the JSON source needs to be simple; by that I mean no nested objects like lists or maps.
{
  "foo1": "my foo1",
  "foo2": "my foo2",
  "foo3": "my foo3"
}
data "aws_s3_bucket_object" "config-json" {
bucket = "my-bucket"
key = "foo.json"
}
data "external" "config-map" {
program = ["echo", "${data.aws_s3_bucket_object.config-json.body}"]
}
output "foo" {
value = ["${values(data.external.config-map.result)}"]
}
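As an aside not covered in the answer above: on Terraform 0.12 and later, jsondecode can handle nested structures directly, provided the object is valid JSON and has a text-compatible content type (otherwise the data source's body attribute is not populated). A sketch against the question's data source, with an illustrative output name:
locals {
  # Decode the whole document; nested lists and maps survive the decode
  global_config = jsondecode(data.aws_s3_bucket_object.config.body)
}

output "info_cidrs" {
  value = local.global_config.info
}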

Terraform overwriting state file on remote backend

Most probably I am doing something wrong or missing something here.
This is what my Terraform template looks like:
locals {
  credentials_file_path = "~/gcp-auth/account.json"
}

terraform {
  backend "gcs" {
    bucket      = "somebucket-tf-state"
    prefix      = "terraform/state/"
    credentials = "~/gcp-auth/account.json"
  }
}

provider "google" {
  region      = "${var.region}"
  credentials = "${file(local.credentials_file_path)}"
}

module "project" {
  source          = "../modules/gcp-project/"
  project_name    = "${var.project_name}"
  billing_account = "${var.billing_account}"
  org_id          = "${var.org_id}"
}
When I run this multiple times with different parameters, it overwrites the previous state file.
This is what I see in the bucket:
Buckets/somebucket-tf-state/terraform/state/default.tfstate
Is there a way I can create different state files per project I run the template for?
If I understand what you're trying to do correctly, then it sounds like what you need is workspaces.
Just do:
# Select the per-project workspace or create a new one
terraform workspace select $GCE_PROJECT || terraform workspace new $GCE_PROJECT

# Plan and apply as usual
terraform plan -out .terraform/.terraform.plan && terraform apply .terraform/.terraform.plan

# Revert to the default workspace
terraform workspace select default
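The workspace name is also available inside the configuration as terraform.workspace, which can be handy for per-project naming. A sketch against the module call above, substituting the workspace name for var.project_name:
module "project" {
  source          = "../modules/gcp-project/"
  project_name    = "${terraform.workspace}"
  billing_account = "${var.billing_account}"
  org_id          = "${var.org_id}"
}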
The better option is to use GitOps: create an environment for each branch, and for every environment inject the correct value into the bucket name.
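A minimal sketch of that approach using partial backend configuration (the injected prefix value here is illustrative and would come from the per-environment pipeline):
# Leave the per-project prefix out of the code...
terraform {
  backend "gcs" {
    bucket      = "somebucket-tf-state"
    credentials = "~/gcp-auth/account.json"
  }
}

# ...and inject it at init time, once per project/environment:
#   terraform init -backend-config="prefix=terraform/state/<project-name>"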