How to encrypt an S3 bucket using Terraform

I am trying to create an encrypted S3 bucket. After I execute terraform apply everything looks good, but when I look at the bucket in the AWS Console it does not show as encrypted. I am also aware of the previous question.
Here is my terraform version:
Terraform v0.11.13
+ provider.aws v2.2.0
Here is my tf file:
resource "aws_s3_bucket" "test-tf-enc" {
bucket = "test-tf-enc"
acl = "private"
tags {
Name = "test-tf-enc"
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
This is the output after I execute the command:
aws_s3_bucket.test-tf-enc: Creating...
acceleration_status: "" => "<computed>"
acl: "" => "private"
arn: "" => "<computed>"
bucket: "" => "test-tf-enc"
bucket_domain_name: "" => "<computed>"
bucket_regional_domain_name: "" => "<computed>"
force_destroy: "" => "false"
hosted_zone_id: "" => "<computed>"
region: "" => "<computed>"
request_payer: "" => "<computed>"
server_side_encryption_configuration.#: "" => "1"
server_side_encryption_configuration.0.rule.#: "" => "1"
server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.#: "" => "1"
server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.0.sse_algorithm: "" => "AES256"
tags.%: "" => "1"
tags.Name: "" => "test-tf-enc"
versioning.#: "" => "<computed>"
website_domain: "" => "<computed>"
website_endpoint: "" => "<computed>"
aws_s3_bucket.test-tf-enc: Still creating... (10s elapsed)
aws_s3_bucket.test-tf-enc: Creation complete after 10s (ID: test-tf-enc)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

It works as expected. The confusion came from validating the result in the AWS Management Console with a different user that did not have sufficient permissions; the insufficient-permissions message in the UI is only visible after expanding the Encryption pane.
Use the AWS CLI for troubleshooting to reduce the problem surface.
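For example, the bucket's encryption settings can be checked directly with the CLI (bucket name taken from the example above):
aws s3api get-bucket-encryption --bucket test-tf-enc
If server-side encryption is configured, this returns the AES256 rule applied by Terraform; if not, the call fails reporting that no server-side encryption configuration was found.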

Related

Why doesn't a target group attachment work like a volume attachment?

When using count = 3 on the aws_instance, aws_volume_attachment, and aws_lb_target_group_attachment resources, I can terminate a single EC2 instance, and on the next terraform apply it will create just the single missing instance and the single missing volume attachment, but it will attempt to delete and recreate all 3 target group attachments. This briefly leaves all 3 instances unhealthy in the target group, meaning all 3 get sent requests even though 2 of them are healthy and one isn't.
resource "aws_volume_attachment" "kafka_att" {
count = "${var.zookeeper-max}"
device_name = "/dev/sdh"
volume_id =
"${element(aws_ebs_volume.kafkaVolumes.*.id,count.index)}"
instance_id =
"${element(aws_instance.zookeeper.*.id,count.index)}"
depends_on =
["aws_instance.zookeeper","aws_ebs_volume.kafkaVolumes"]
lifecycle {
ignore_changes = ["aws_instance.zookeeper"]
}
}
resource "aws_lb_target_group_attachment" "schemaRegistryTgAttach" {
count = "${var.zookeeper-max}"
target_group_arn =
"${aws_alb_target_group.KafkaSchemaRegistryTG.arn}"
target_id = "${element(aws_instance.zookeeper.*.id,count.index)}"
depends_on = ["aws_instance.zookeeper"]
lifecycle {
ignore_changes = ["aws_instance.zookeeper"]
}
}
resource "aws_instance" "zookeeper" {
count = "${var.zookeeper-max}"
...
blah blah
}
So if I terminate instance 1, I would expect the next terraform apply to re-create zookeeper[1], kafka_att[1], and schemaRegistryTgAttach[1].
The code above creates instance[1] and volume_attachment[1], but no target group attachments. If I remove the lifecycle block from the target group attachments, it then deletes and re-creates all 3. How can I change it so that, when a single EC2 instance is re-created, only the single corresponding target group attachment is created?
If I try using the same method as for the volume attachment...
resource "aws_lb_target_group_attachment" "schemaRegistryTgAttach" {
count = "${var.zookeeper-max}"
target_group_arn =
"${aws_alb_target_group.KafkaSchemaRegistryTG.arn}"
target_id = "${element(aws_instance.zookeeper.*.id,count.index)}"
depends_on = ["aws_instance.zookeeper"]
lifecycle {
ignore_changes = ["aws_instance.zookeeper"]
}
}
then it creates no TG attachments, but does create the correct volume attachment. The plan output is:
+ aws_instance.zookeeper[0]
id: <computed>
ami: "ami-09693313102a30b2c"
arn: <computed>
associate_public_ip_address: "false"
availability_zone: <computed>
cpu_core_count: <computed>
cpu_threads_per_core: <computed>
credit_specification.#: "1"
credit_specification.0.cpu_credits: "unlimited"
disable_api_termination: "true"
ebs_block_device.#: <computed>
ebs_optimized: "false"
ephemeral_block_device.#: <computed>
get_password_data: "false"
host_id: <computed>
iam_instance_profile: "devl-ZOOKEEPER_IAM_PROFILE"
instance_state: <computed>
instance_type: "t3.small"
ipv6_address_count: <computed>
ipv6_addresses.#: <computed>
key_name: "devl-key"
monitoring: "false"
network_interface.#: <computed>
network_interface_id: <computed>
password_data: <computed>
placement_group: <computed>
primary_network_interface_id: <computed>
private_dns: <computed>
private_ip: <computed>
public_dns: <computed>
public_ip: <computed>
root_block_device.#: "1"
root_block_device.0.delete_on_termination: "true"
root_block_device.0.volume_id: <computed>
root_block_device.0.volume_size: "16"
root_block_device.0.volume_type: "gp2"
security_groups.#: <computed>
subnet_id: "subnet-5b8d8200"
tags.%: "3"
tags.Description: "Do not terminate more than one at a time"
tags.Env: "devl"
tags.Name: "devl-zookeeper-0"
tenancy: <computed>
user_data: "70fd2ae9f7da42e2fb15328cd6539c4f7ed4a5be"
volume_tags.%: <computed>
vpc_security_group_ids.#: "1"
vpc_security_group_ids.3423986071: "sg-03911aa28dbcb3f20"
+ aws_volume_attachment.kafka_att[0]
id: <computed>
device_name: "/dev/sdh"
instance_id: "${element(aws_instance.zookeeper.*.id,count.index)}"
volume_id: "vol-021d1530117f31905"
If, however, I remove the lifecycle block on the target group attachment, it attempts to destroy and recreate all 3 target group attachments.
+ aws_instance.zookeeper[0]
id: <computed>
ami: "ami-09693313102a30b2c"
arn: <computed>
associate_public_ip_address: "false"
availability_zone: <computed>
cpu_core_count: <computed>
cpu_threads_per_core: <computed>
credit_specification.#: "1"
credit_specification.0.cpu_credits: "unlimited"
disable_api_termination: "true"
ebs_block_device.#: <computed>
ebs_optimized: "false"
ephemeral_block_device.#: <computed>
get_password_data: "false"
host_id: <computed>
iam_instance_profile: "devl-ZOOKEEPER_IAM_PROFILE"
instance_state: <computed>
instance_type: "t3.small"
ipv6_address_count: <computed>
ipv6_addresses.#: <computed>
key_name: "devl-key"
monitoring: "false"
network_interface.#: <computed>
network_interface_id: <computed>
password_data: <computed>
placement_group: <computed>
primary_network_interface_id: <computed>
private_dns: <computed>
private_ip: <computed>
public_dns: <computed>
public_ip: <computed>
root_block_device.#: "1"
root_block_device.0.delete_on_termination: "true"
root_block_device.0.volume_id: <computed>
root_block_device.0.volume_size: "16"
root_block_device.0.volume_type: "gp2"
security_groups.#: <computed>
source_dest_check: "false"
subnet_id: "subnet-5b8d8200"
tags.%: "3"
tags.Description: "Do not terminate more than one at a time"
tags.Env: "devl"
tags.Name: "devl-zookeeper-0"
tenancy: <computed>
user_data: "70fd2ae9f7da42e2fb15328cd6539c4f7ed4a5be"
volume_tags.%: <computed>
vpc_security_group_ids.#: "1"
vpc_security_group_ids.3423986071: "sg-03911aa28dbcb3f20"
-/+ aws_lb_target_group_attachment.SchemaRegistryTgAttach[0] (new resource required)
id: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034-20190218210336558900000001" => <computed> (forces new resource)
port: "8081" => "8081"
target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034" => "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034"
target_id: "i-03ed28ab175c0f684" => "${element(aws_volume_attachment.kafka_att.*.instance_id,count.index)}" (forces new resource)
-/+ aws_lb_target_group_attachment.SchemaRegistryTgAttach[1] (new resource required)
id: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034-20190218210336576900000002" => <computed> (forces new resource)
port: "8081" => "8081"
target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034" => "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034"
target_id: "i-0b39bd7244f32809f" => "${element(aws_volume_attachment.kafka_att.*.instance_id,count.index)}" (forces new resource)
-/+ aws_lb_target_group_attachment.SchemaRegistryTgAttach[2] (new resource required)
id: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034-20190218210336671000000003" => <computed> (forces new resource)
port: "8081" => "8081"
target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034" => "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034"
target_id: "i-0bbd8d3a10890b94c" => "${element(aws_volume_attachment.kafka_att.*.instance_id,count.index)}" (forces new resource)
+ aws_volume_attachment.kafka_att[0]
id: <computed>
device_name: "/dev/sdh"
instance_id: "${element(aws_instance.zookeeper.*.id,count.index)}"
volume_id: "vol-021d1530117f31905"
How can I make it behave like the volume attachment, so that if instance 3 has died, terraform apply will create only volume attachment 3 and only TG attachment 3?

Terraform fails in CI while working fine locally when creating S3 bucket and flow logs

I have written the following Terraform code:
data "template_file" "external-bucket-policy" {
template = "${file("${path.module}/policies/bucket-policy.tpl")}"
vars {
bucket-name = "${local.bucket_name}"
}
}
resource "aws_s3_bucket" "vpc_logs_recordsyes" {
bucket = "${local.bucket_name}"
acl = "private"
force_destroy = false
versioning {
enabled = true
}
policy = "${data.template_file.external-bucket-policy.rendered}"
}
Then I want to create VPC flow logs:
resource "aws_flow_log" "example" {
log_destination = "arn:aws:s3:::${local.bucket_name}"
log_destination_type = "${var.log_destination_type}"
traffic_type = "${var.traffic_type}"
vpc_id = "${var.vpc_id}"
}
When running in CI I get the following:
aws_s3_bucket.vpc_logs_recordsyes: Creating...
acceleration_status: "" => "<computed>"
acl: "" => "private"
arn: "" => "<computed>"
bucket: "" => "xsight-logging-bucket-Dev-us-east-1"
bucket_domain_name: "" => "<computed>"
bucket_regional_domain_name: "" => "<computed>"
force_destroy: "" => "false"
hosted_zone_id: "" => "<computed>"
policy: "" => "{\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Sid\": \"\",\r\n \"Effect\": \"Deny\",\r\n \"Principal\": {\r\n \"AWS\": \"*\"\r\n },\r\n \"Action\": \"s3:DeleteBucket\",\r\n \"Resource\": \"arn:aws:s3:::xsight-logging-bucket-Dev-us-east-1\"\r\n },\r\n {\r\n \"Sid\": \"DenyIncorrectEncryptionHeader\",\r\n \"Effect\": \"Deny\",\r\n \"Principal\": \"*\",\r\n \"Action\": \"s3:PutObject\",\r\n \"Resource\": \"arn:aws:s3:::xsight-logging-bucket-Dev-us-east-1/*\",\r\n \"Condition\": {\r\n \"StringNotEquals\": {\r\n \"s3:x-amz-server-side-encryption\": \"AES256\"\r\n }\r\n }\r\n },\r\n {\r\n \"Sid\": \"DenyUnEncryptedObjectUploads\",\r\n \"Effect\": \"Deny\",\r\n \"Principal\": \"*\",\r\n \"Action\": \"s3:PutObject\",\r\n \"Resource\": \"arn:aws:s3:::xsight-logging-bucket-Dev-us-east-1/*\",\r\n \"Condition\": {\r\n \"Null\": {\r\n \"s3:x-amz-server-side-encryption\": true\r\n }\r\n }\r\n }\r\n ]\r\n}"
region: "" => "<computed>"
request_payer: "" => "<computed>"
versioning.#: "" => "1"
versioning.0.enabled: "" => "true"
versioning.0.mfa_delete: "" => "false"
website_domain: "" => "<computed>"
website_endpoint: "" => "<computed>"
aws_flow_log.example: Creating...
log_destination: "" => "arn:aws:s3:::xsight-logging-bucket-Dev-us-east-1"
log_destination_type: "" => "s3"
log_group_name: "" => "<computed>"
traffic_type: "" => "ALL"
vpc_id: "" => "vpc-3e2ab845"
Error: Error applying plan:
2 error(s) occurred:
* aws_s3_bucket.vpc_logs_recordsyes: 1 error(s) occurred:
* aws_s3_bucket.vpc_logs_recordsyes: Error creating S3 bucket: InvalidBucketName: The specified bucket is not valid.
status code: 400, request id: A2E94D42FF9CF218, host id: eD0zSCQ8+85kIIsctFeXcG4jLd4LDpeW0PRK01aq5JrWiW3qkyDKF76WeVKGgJVOcJT3gB2BBzk=
* aws_flow_log.example: 1 error(s) occurred:
* aws_flow_log.example: unexpected EOF
You are getting the error back from the AWS API call itself, which means the request is being rejected on the AWS side rather than anything being wrong with the Terraform code.
Ref: https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_s3_bucket.go#L583
Looking at the error, the important part is Error creating S3 bucket: InvalidBucketName. This suggests the bucket name you have chosen does not comply with the S3 naming rules.
The AWS documentation (https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html) states that a bucket name must not contain uppercase characters.
Can you update your bucket name to xsight-logging-bucket-dev-us-east-1 and try again?
Also, for the VPC flow log you don't need to build the arn:aws:s3::: string by hand. You can reference the bucket resource directly, e.g. log_destination = "${aws_s3_bucket.vpc_logs_recordsyes.arn}", which also makes the dependency on the bucket explicit.
Ref: https://www.terraform.io/docs/providers/aws/r/flow_log.html
It looks like your bucket name isn't a valid S3 bucket name as mentioned in the AWS User Guide:
The following are the rules for naming S3 buckets in all AWS Regions:
Bucket names must be unique across all existing bucket names in Amazon S3.
Bucket names must comply with DNS naming conventions.
Bucket names must be at least 3 and no more than 63 characters long.
Bucket names must not contain uppercase characters or underscores.
Bucket names must start with a lowercase letter or number.
Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.
Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL wildcard certificate only matches buckets that don't contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you do not use periods (".") in bucket names when using virtual hosted–style buckets.
Specifically note the rule that a bucket name must not contain uppercase characters, while your plan shows that you are using an uppercase character in the S3 bucket name:
bucket: "" => "xsight-logging-bucket-Dev-us-east-1"
Terraform can normally catch these types of error at plan time because the validation rules are known ahead of time. Unfortunately it must also stay backwards compatible, and before 1 March 2018 buckets in us-east-1 were allowed a less restrictive naming scheme, so it's not easy to validate this at plan time.
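One way to guard against this at the source, sketched against the resource from the question (only the lower() call is new; everything else is as posted), is to normalise the name with Terraform's lower() interpolation function:
resource "aws_s3_bucket" "vpc_logs_recordsyes" {
  # force the name to lower case so it satisfies the S3/DNS naming rules
  bucket        = "${lower(local.bucket_name)}"
  acl           = "private"
  force_destroy = false

  versioning {
    enabled = true
  }

  policy = "${data.template_file.external-bucket-policy.rendered}"
}
Better still, define local.bucket_name itself in lower case so the bucket policy template and the flow log destination all agree on the same value.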
On top of this, your flow logs have a race condition because Terraform is trying to create the S3 bucket and the VPC flow log at the same time.
To give Terraform a hint about the dependency order of resources, you can interpolate an attribute of one resource into the arguments of another, or use depends_on where that isn't possible (both approaches are shown below).
In your case you should just refer to the S3 bucket resource in the VPC flow log resource:
resource "aws_flow_log" "example" {
log_destination = "${aws_s3_bucket.vpc_logs.bucket}"
log_destination_type = "${var.log_destination_type}"
traffic_type = "${var.traffic_type}"
vpc_id = "${var.vpc_id}"
}
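If there were no attribute to interpolate, an explicit depends_on would give Terraform the same ordering hint. A sketch of that variant (the destination string is the hand-built ARN from the question, shown only to illustrate the ordering):
resource "aws_flow_log" "example" {
  log_destination      = "arn:aws:s3:::${local.bucket_name}"
  log_destination_type = "${var.log_destination_type}"
  traffic_type         = "${var.traffic_type}"
  vpc_id               = "${var.vpc_id}"

  # no attribute of the bucket is referenced above, so state the dependency explicitly
  depends_on = ["aws_s3_bucket.vpc_logs_recordsyes"]
}
Interpolating the resource attribute, as in the first example, is still the cleaner option because it both orders the resources and guarantees the ARN matches the bucket that was actually created.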

Ruby AWS SDK (v2/v3) Tag spot instances

According to this link it is possible to tag spot fleet instances, and the tags are automatically propagated to the launched instances. Is it possible to do the same for normal spot instances? My approach so far:
ec2 = Aws::EC2::Resource.new({region: region, credentials: creds})

launch_specification = {
  :security_groups => ['ccc'],
  :ebs_optimized   => true,
  :image_id        => "image_id",
  :instance_type   => "type",
  :key_name        => "key",
  :placement       => {:group_name => "ggg"},
  :user_data       => ""
}

resp = ec2.client.request_spot_instances(
  :instance_count       => count,
  :launch_specification => launch_specification,
  :spot_price           => price.to_s,
  :type                 => 'one-time',
  :dry_run              => false
)
resp.spot_instance_requests.each do |sir|
  ec2.create_tags({
    dry_run: false,
    resources: [sir.spot_instance_request_id],
    tags: [
      {
        key: "owner",
        value: "ooo",
      },
    ],
  })
end
Tags are created for the spot instance request, but are not propagated to the launched instances.
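The question was left at that point; for reference, a sketch of one common workaround (not part of the original post): wait for the spot requests to be fulfilled and then tag the launched instances themselves, since tags on the request are not propagated.
# Wait for the spot requests to be fulfilled, then tag the resulting instances.
request_ids = resp.spot_instance_requests.map(&:spot_instance_request_id)

ec2.client.wait_until(:spot_instance_request_fulfilled,
                      spot_instance_request_ids: request_ids)

fulfilled = ec2.client.describe_spot_instance_requests(
  spot_instance_request_ids: request_ids
).spot_instance_requests

instance_ids = fulfilled.map(&:instance_id).compact

ec2.create_tags(
  resources: instance_ids,
  tags: [{ key: "owner", value: "ooo" }]
) unless instance_ids.empty?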

How to delete server certificate pointed to by load balancer?

I'm trying to upgrade some certificates on my terraform-managed infrastructure. The tf plan is to delete the listener on the aws_elb.load_balancer that uses the old aws_iam_server_certificate, create a new aws_iam_server_certificate, and point a new listener at it. But when the plan is applied, the listener is never deleted, so the old cert cannot be deleted.
I know the listener is not deleted because when I run terraform plan again I see that the old listener still needs to be destroyed.
How can I convince terraform to destroy this listener and the old cert, and create the new cert and listener, all by doing a simple terraform apply, that is, without manually calling terraform destroy?
plan
~ module.ecs.aws_ecs_service.service
task_definition: "arn:aws:ecs:us-east-1:12345:task-definition/dle-glossary-api-sandbox:20" => "${aws_ecs_task_definition.task_definition.arn}"
-/+ module.ecs.aws_ecs_task_definition.admin_task_definition
arn: "arn:aws:ecs:us-east-1:12345:task-definition/dle-glossary-api-sandbox-admin:20" => "<computed>"
container_definitions: "93b15fbec63f6cae8389cc6befa505890002ec4f" => "abf62f02c60dbfa30952def0eb69fec96b455205" (forces new resource)
family: "dle-glossary-api-sandbox-admin" => "dle-glossary-api-sandbox-admin"
network_mode: "" => "<computed>"
revision: "20" => "<computed>"
-/+ module.ecs.aws_ecs_task_definition.task_definition
arn: "arn:aws:ecs:us-east-1:12345:task-definition/dle-glossary-api-sandbox:20" => "<computed>"
container_definitions: "9e38e676174426b7c8179446f788d7eeffa90583" => "92fd7350f9798461d78f80bfca4fccea6cea68db" (forces new resource)
family: "dle-glossary-api-sandbox" => "dle-glossary-api-sandbox"
network_mode: "" => "<computed>"
revision: "20" => "<computed>"
~ module.ecs.aws_elb.load_balancer
needs to delete listener 2240553862, create a listener. This should free up the cert resource. Why can't we delete it?
listener.2240553862.instance_port: "80" => "0"
listener.2240553862.instance_protocol: "http" => ""
listener.2240553862.lb_port: "443" => "0"
listener.2240553862.lb_protocol: "https" => ""
listener.2240553862.ssl_certificate_id: "arn:aws:iam::12345:server-certificate/dle-glossary-api-cert-sandbox009476f6b7..." => ""
listener.3057123346.instance_port: "80" => "80"
listener.3057123346.instance_protocol: "http" => "http"
listener.3057123346.lb_port: "80" => "80"
listener.3057123346.lb_protocol: "http" => "http"
listener.3057123346.ssl_certificate_id: "" => ""
listener.~1222724879.instance_port: "" => "80"
listener.~1222724879.instance_protocol: "" => "http"
listener.~1222724879.lb_port: "" => "443"
listener.~1222724879.lb_protocol: "" => "https"
listener.~1222724879.ssl_certificate_id: "" => "${var.ssl_certificate_arn}"
-/+ module.iam.aws_iam_server_certificate.cert
arn: "arn:aws:iam::12345:server-certificate/dle-glossary-api-cert-sandbox009476f6b7..." => "<computed>"
certificate_body: "x" => "y" (forces new resource)
certificate_chain: "z" => "q"
name: "dle-glossary-api-cert-sandbox009476f6b7..." => "<computed>"
name_prefix: "dle-glossary-api-cert-sandbox" => "dle-glossary-api-cert-sandbox"
path: "/" => "/"
private_key: "xyz" => "zxy" (forces new resource)
The error is:
aws_iam_server_certificate.cert (deposed #0): DeleteConflict:
Certificate: ASCAI25L32IRFVVIZQNIQ is currently in use by
arn:aws:elasticloadbalancing:us-east-1:12345:loadbalancer/dle-glossary-api-sandbox. Please
remove it first before deleting it from IAM. status code: 409,
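The question was posted without an accepted fix; a pattern commonly used for rotating an aws_iam_server_certificate that is referenced by an ELB listener (and the reason name_prefix appears in the plan above) is create_before_destroy, so that the new certificate and listener exist before the old ones are removed. A sketch reusing the names from the plan output (the variable names are placeholders):
resource "aws_iam_server_certificate" "cert" {
  name_prefix       = "dle-glossary-api-cert-sandbox"
  certificate_body  = "${var.certificate_body}"
  certificate_chain = "${var.certificate_chain}"
  private_key       = "${var.private_key}"

  lifecycle {
    # create the replacement certificate (new name via name_prefix) first,
    # let the ELB listener switch over to it, then destroy the old certificate
    create_before_destroy = true
  }
}
With create_before_destroy, a single terraform apply creates the new certificate, updates the listener to reference it, and only then deletes the deposed certificate, which avoids the DeleteConflict error.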

Configure logstash to read logs from Amazon S3 bucket

I have been trying to configure Logstash to read logs which are being generated in my Amazon S3 bucket, but have not been successful. Below are the details:
I have installed Logstash on an EC2 instance.
My logs are all .gz files in the S3 bucket.
The conf file looks like this:
input {
  s3 {
    access_key_id     => "MY_ACCESS_KEY_ID"
    bucket            => "MY_BUCKET"
    region            => "MY_REGION"
    secret_access_key => "MY_SECRET_ACESS_KEY"
    prefix            => "/"
    type              => "s3"
    add_field         => { source => gzfiles }
  }
}

filter {
  if [type] == "s3" {
    csv {
      columns => [ "date", "time", "x-edge-location", "sc-bytes", "c-ip", "cs-method", "Host", "cs-uri-stem", "sc-status", "Referer", "User-Agent", "cs-uri-query", "Cookie", "x-edge-result-type", "x-edge-request-id" ]
    }
  }

  if([message] =~ /^#/) {
    drop{}
  }
}

output {
  elasticsearch {
    host     => "ELASTICSEARCH_URL"
    protocol => "http"
  }
}