How to delete server certificate pointed to by load balancer? - amazon-web-services

I'm trying to upgrade some certificates on my terraform-managed infrastructure. The plan is to delete the aws_elb.load_balancer listener that uses the old aws_iam_server_certificate, create a new aws_iam_server_certificate, and point a new listener at it. But when the plan is applied, the listener is never deleted, so the old cert cannot be deleted.
I know the listener is not deleted because when I run terraform plan again I see that the old listener still needs to be destroyed.
How can I convince terraform to destroy this listener and the old cert, and create the new cert and listener, all by doing a simple terraform apply, that is, without manually calling terraform destroy?
The plan output is:
~ module.ecs.aws_ecs_service.service
task_definition: "arn:aws:ecs:us-east-1:12345:task-definition/dle-glossary-api-sandbox:20" => "${aws_ecs_task_definition.task_definition.arn}"
-/+ module.ecs.aws_ecs_task_definition.admin_task_definition
arn: "arn:aws:ecs:us-east-1:12345:task-definition/dle-glossary-api-sandbox-admin:20" => "<computed>"
container_definitions: "93b15fbec63f6cae8389cc6befa505890002ec4f" => "abf62f02c60dbfa30952def0eb69fec96b455205" (forces new resource)
family: "dle-glossary-api-sandbox-admin" => "dle-glossary-api-sandbox-admin"
network_mode: "" => "<computed>"
revision: "20" => "<computed>"
-/+ module.ecs.aws_ecs_task_definition.task_definition
arn: "arn:aws:ecs:us-east-1:12345:task-definition/dle-glossary-api-sandbox:20" => "<computed>"
container_definitions: "9e38e676174426b7c8179446f788d7eeffa90583" => "92fd7350f9798461d78f80bfca4fccea6cea68db" (forces new resource)
family: "dle-glossary-api-sandbox" => "dle-glossary-api-sandbox"
network_mode: "" => "<computed>"
revision: "20" => "<computed>"
~ module.ecs.aws_elb.load_balancer
(the plan needs to delete listener 2240553862 and create a new listener; this should free up the cert resource, so why can't it be deleted?)
listener.2240553862.instance_port: "80" => "0"
listener.2240553862.instance_protocol: "http" => ""
listener.2240553862.lb_port: "443" => "0"
listener.2240553862.lb_protocol: "https" => ""
listener.2240553862.ssl_certificate_id: "arn:aws:iam::12345:server-certificate/dle-glossary-api-cert-sandbox009476f6b7..." => ""
listener.3057123346.instance_port: "80" => "80"
listener.3057123346.instance_protocol: "http" => "http"
listener.3057123346.lb_port: "80" => "80"
listener.3057123346.lb_protocol: "http" => "http"
listener.3057123346.ssl_certificate_id: "" => ""
listener.~1222724879.instance_port: "" => "80"
listener.~1222724879.instance_protocol: "" => "http"
listener.~1222724879.lb_port: "" => "443"
listener.~1222724879.lb_protocol: "" => "https"
listener.~1222724879.ssl_certificate_id: "" => "${var.ssl_certificate_arn}"
-/+ module.iam.aws_iam_server_certificate.cert
arn: "arn:aws:iam::12345:server-certificate/dle-glossary-api-cert-sandbox009476f6b7..." => "<computed>"
certificate_body: "x" => "y" (forces new resource)
certificate_chain: "z" => "q"
name: "dle-glossary-api-cert-sandbox009476f6b7..." => "<computed>"
name_prefix: "dle-glossary-api-cert-sandbox" => "dle-glossary-api-cert-sandbox"
path: "/" => "/"
private_key: "xyz" => "zxy" (forces new resource)
The error is:
aws_iam_server_certificate.cert (deposed #0): DeleteConflict:
Certificate: ASCAI25L32IRFVVIZQNIQ is currently in use by
arn:aws:elasticloadbalancing:us-east-1:12345:loadbalancer/dle-glossary-api-sandbox. Please
remove it first before deleting it from IAM. status code: 409,
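One pattern commonly used for this kind of certificate rotation (a sketch, not taken from the original post) is to keep name_prefix on the certificate and add create_before_destroy, so Terraform creates the replacement certificate first, repoints the listener at it, and only afterwards deletes the deposed certificate. The resource name and name_prefix below come from the plan output; the file paths are illustrative placeholders:

resource "aws_iam_server_certificate" "cert" {
  name_prefix       = "dle-glossary-api-cert-sandbox"
  certificate_body  = "${file("cert.pem")}"   # illustrative path
  certificate_chain = "${file("chain.pem")}"  # illustrative path
  private_key       = "${file("key.pem")}"    # illustrative path

  lifecycle {
    # Create the replacement certificate before destroying the old one,
    # so the ELB listener can be switched over first and the old cert
    # is no longer "currently in use" when IAM tries to delete it.
    create_before_destroy = true
  }
}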

Related

Naming an AWS EC2 security group IP permissions rule

I am using the AWS PHP SDK version 3. I am able to create security groups using the API, as well as create IP permission rules. What I can't figure out is how to give the IP permissions rule a name.
Here's what I have:
$params = [
    'Description' => 'My Security Group',
    'GroupName'   => 'my_security_group',
    'VpcId'       => 'vpc-a9d2h3d7',
    'TagSpecifications' => [
        [
            'ResourceType' => 'security-group',
            'Tags' => [
                ['Key' => 'Name', 'Value' => 'My Security Group']
            ]
        ]
    ],
];
$Ec2Client->createSecurityGroup($params);
At this point the group is created.
Then I create an IP Permissions rule:
$ip_permissions = [
    'GroupName'  => 'my_security_group',
    'FromPort'   => 0,
    'ToPort'     => 65535,
    'IpProtocol' => 'tcp',
    'IpRanges'   => [['CidrIp' => 'xx.xxx.xx.xxxx/32', 'Description' => 'Main Office']],
];
$Ec2Client->authorizeSecurityGroupIngress($ip_permissions);
Through the AWS Console, I can see that the rule is created, but the Name column is empty. How do I create the Name through the API?
It would be the same, using TagSpecifications, but instead of security-group you need security-group-rule:
'TagSpecifications' => [
    [
        'ResourceType' => 'security-group-rule',
        'Tags' => [
            ['Key' => 'Name', 'Value' => 'My Security Group Rule']
        ]
    ]
]
A full example in the AWS CLI (I don't have PHP):
aws ec2 authorize-security-group-ingress --group-id sg-00102bde0b55e29fe --ip-permissions FromPort=0,IpProtocol=tcp,IpRanges='[{CidrIp=10.10.10.10/32,Description="Main Office"}]',ToPort=65535 --tag-specifications ResourceType=security-group-rule,Tags='[{Key=Name,Value=MyName}]'

How to encrypt S3 bucket using Terraform

I am trying to create an encrypted S3 bucket. After I execute terraform apply, it all looks good, but when I look at the bucket in the AWS Console, it is not encrypted. I am also aware of the previous question.
Here is my terraform version:
Terraform v0.11.13
+ provider.aws v2.2.0
Here is my tf file:
resource "aws_s3_bucket" "test-tf-enc" {
bucket = "test-tf-enc"
acl = "private"
tags {
Name = "test-tf-enc"
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
This is the output after I execute the command:
aws_s3_bucket.test-tf-enc: Creating...
acceleration_status: "" => "<computed>"
acl: "" => "private"
arn: "" => "<computed>"
bucket: "" => "test-tf-enc"
bucket_domain_name: "" => "<computed>"
bucket_regional_domain_name: "" => "<computed>"
force_destroy: "" => "false"
hosted_zone_id: "" => "<computed>"
region: "" => "<computed>"
request_payer: "" => "<computed>"
server_side_encryption_configuration.#: "" => "1"
server_side_encryption_configuration.0.rule.#: "" => "1"
server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.#: "" => "1"
server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.0.sse_algorithm: "" => "AES256"
tags.%: "" => "1"
tags.Name: "" => "test-tf-enc"
versioning.#: "" => "<computed>"
website_domain: "" => "<computed>"
website_endpoint: "" => "<computed>"
aws_s3_bucket.test-tf-enc: Still creating... (10s elapsed)
aws_s3_bucket.test-tf-enc: Creation complete after 10s (ID: test-tf-enc)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Works as expected.
The confusion came from validating the operation through the AWS Management Console UI with a different user that did not have sufficient permissions. The insufficient-permissions message in the UI is only visible after expanding the Encryption pane.
Use the AWS CLI for troubleshooting to reduce the problem surface.
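For example, the applied encryption configuration can be confirmed directly with the AWS CLI (bucket name taken from the question); it returns the server-side encryption rules Terraform set:

aws s3api get-bucket-encryption --bucket test-tf-enc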

Why doesn't a target group attachment work like a volume attachment?

When using count = 3 on aws_instance, aws_volume_attachment, and aws_lb_target_group_attachment, I can terminate a single EC2 instance, and on the next terraform apply it will create just the single missing instance and the single missing volume attachment, but it will attempt to delete and recreate all 3 target group attachments. This makes all 3 instances briefly unhealthy in the target group, meaning requests get sent to all 3 even though 2 of them are healthy and one isn't.
resource "aws_volume_attachment" "kafka_att" {
count = "${var.zookeeper-max}"
device_name = "/dev/sdh"
volume_id =
"${element(aws_ebs_volume.kafkaVolumes.*.id,count.index)}"
instance_id =
"${element(aws_instance.zookeeper.*.id,count.index)}"
depends_on =
["aws_instance.zookeeper","aws_ebs_volume.kafkaVolumes"]
lifecycle {
ignore_changes = ["aws_instance.zookeeper"]
}
}
resource "aws_lb_target_group_attachment" "schemaRegistryTgAttach" {
count = "${var.zookeeper-max}"
target_group_arn =
"${aws_alb_target_group.KafkaSchemaRegistryTG.arn}"
target_id = "${element(aws_instance.zookeeper.*.id,count.index)}"
depends_on = ["aws_instance.zookeeper"]
lifecycle {
ignore_changes = ["aws_instance.zookeeper"]
}
}
resource "aws_instance" "zookeeper" {
count = "${var.zookeeper-max}"
...
blah blah
}
So if I terminate instance 1, I would expect the next terraform apply to re-create zookeeper[1], kafka_att[1], and schemaRegistryTgAttach[1].
The code above creates instance[1] and volume_attachment[1], but no target group attachments. If I remove the lifecycle block from the target group attachments, it deletes and re-creates all 3. How can I change it so that creating a single EC2 instance only creates a single target group attachment?
If I try using the same method as for the volume attachment...
resource "aws_lb_target_group_attachment" "schemaRegistryTgAttach" {
count = "${var.zookeeper-max}"
target_group_arn =
"${aws_alb_target_group.KafkaSchemaRegistryTG.arn}"
target_id = "${element(aws_instance.zookeeper.*.id,count.index)}"
depends_on = ["aws_instance.zookeeper"]
lifecycle {
ignore_changes = ["aws_instance.zookeeper"]
}
}
then it creates no TG attachments, but does create the correct volume attachment. The plan output is:
+ aws_instance.zookeeper[0]
id: <computed>
ami: "ami-09693313102a30b2c"
arn: <computed>
associate_public_ip_address: "false"
availability_zone: <computed>
cpu_core_count: <computed>
cpu_threads_per_core: <computed>
credit_specification.#: "1"
credit_specification.0.cpu_credits: "unlimited"
disable_api_termination: "true"
ebs_block_device.#: <computed>
ebs_optimized: "false"
ephemeral_block_device.#: <computed>
get_password_data: "false"
host_id: <computed>
iam_instance_profile: "devl-ZOOKEEPER_IAM_PROFILE"
instance_state: <computed>
instance_type: "t3.small"
ipv6_address_count: <computed>
ipv6_addresses.#: <computed>
key_name: "devl-key"
monitoring: "false"
network_interface.#: <computed>
network_interface_id: <computed>
password_data: <computed>
placement_group: <computed>
primary_network_interface_id: <computed>
private_dns: <computed>
private_ip: <computed>
public_dns: <computed>
public_ip: <computed>
root_block_device.#: "1"
root_block_device.0.delete_on_termination: "true"
root_block_device.0.volume_id: <computed>
root_block_device.0.volume_size: "16"
root_block_device.0.volume_type: "gp2"
security_groups.#: <computed>
subnet_id: "subnet-5b8d8200"
tags.%: "3"
tags.Description: "Do not terminate more than one at a time"
tags.Env: "devl"
tags.Name: "devl-zookeeper-0"
tenancy: <computed>
user_data: "70fd2ae9f7da42e2fb15328cd6539c4f7ed4a5be"
volume_tags.%: <computed>
vpc_security_group_ids.#: "1"
vpc_security_group_ids.3423986071: "sg-03911aa28dbcb3f20"
+ aws_volume_attachment.kafka_att[0]
id: <computed>
device_name: "/dev/sdh"
instance_id: "${element(aws_instance.zookeeper.*.id,count.index)}"
volume_id: "vol-021d1530117f31905"
If, however, I remove the lifecycle block on the target group attachment, it attempts to destroy and recreate all 3 target group attachments:
+ aws_instance.zookeeper[0]
    id: <computed>
    ami: "ami-09693313102a30b2c"
    arn: <computed>
    associate_public_ip_address: "false"
    availability_zone: <computed>
    cpu_core_count: <computed>
    cpu_threads_per_core: <computed>
    credit_specification.#: "1"
    credit_specification.0.cpu_credits: "unlimited"
    disable_api_termination: "true"
    ebs_block_device.#: <computed>
    ebs_optimized: "false"
    ephemeral_block_device.#: <computed>
    get_password_data: "false"
    host_id: <computed>
    iam_instance_profile: "devl-ZOOKEEPER_IAM_PROFILE"
    instance_state: <computed>
    instance_type: "t3.small"
    ipv6_address_count: <computed>
    ipv6_addresses.#: <computed>
    key_name: "devl-key"
    monitoring: "false"
    network_interface.#: <computed>
    network_interface_id: <computed>
    password_data: <computed>
    placement_group: <computed>
    primary_network_interface_id: <computed>
    private_dns: <computed>
    private_ip: <computed>
    public_dns: <computed>
    public_ip: <computed>
    root_block_device.#: "1"
    root_block_device.0.delete_on_termination: "true"
    root_block_device.0.volume_id: <computed>
    root_block_device.0.volume_size: "16"
    root_block_device.0.volume_type: "gp2"
    security_groups.#: <computed>
    source_dest_check: "false"
    subnet_id: "subnet-5b8d8200"
    tags.%: "3"
    tags.Description: "Do not terminate more than one at a time"
    tags.Env: "devl"
    tags.Name: "devl-zookeeper-0"
    tenancy: <computed>
    user_data: "70fd2ae9f7da42e2fb15328cd6539c4f7ed4a5be"
    volume_tags.%: <computed>
    vpc_security_group_ids.#: "1"
    vpc_security_group_ids.3423986071: "sg-03911aa28dbcb3f20"
-/+ aws_lb_target_group_attachment.SchemaRegistryTgAttach[0] (new resource required)
    id: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034-20190218210336558900000001" => <computed> (forces new resource)
    port: "8081" => "8081"
    target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034" => "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034"
    target_id: "i-03ed28ab175c0f684" => "${element(aws_volume_attachment.kafka_att.*.instance_id,count.index)}" (forces new resource)
-/+ aws_lb_target_group_attachment.SchemaRegistryTgAttach[1] (new resource required)
    id: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034-20190218210336576900000002" => <computed> (forces new resource)
    port: "8081" => "8081"
    target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034" => "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034"
    target_id: "i-0b39bd7244f32809f" => "${element(aws_volume_attachment.kafka_att.*.instance_id,count.index)}" (forces new resource)
-/+ aws_lb_target_group_attachment.SchemaRegistryTgAttach[2] (new resource required)
    id: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034-20190218210336671000000003" => <computed> (forces new resource)
    port: "8081" => "8081"
    target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034" => "arn:aws:elasticloadbalancing:eu-west-1:544611607123:targetgroup/devl-KafkaSchemaRegistryTG/46193714a87ea034"
    target_id: "i-0bbd8d3a10890b94c" => "${element(aws_volume_attachment.kafka_att.*.instance_id,count.index)}" (forces new resource)
+ aws_volume_attachment.kafka_att[0]
    id: <computed>
    device_name: "/dev/sdh"
    instance_id: "${element(aws_instance.zookeeper.*.id,count.index)}"
    volume_id: "vol-021d1530117f31905"
How can I make it behave like the volume attachment... so that if instance 3 has died, tf apply will create volume attachment 3 and only TG attachment 3.
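One observation on top of the original post (a sketch, not a confirmed fix): in the plan above, the attribute that forces every new resource is target_id, which now resolves to "${element(aws_volume_attachment.kafka_att.*.instance_id,count.index)}". In Terraform 0.11, element() over a list that contains even one computed element generally becomes computed itself, so all three attachments get re-planned whenever any one volume attachment is being created. A sketch that keeps target_id pointed at the instance directly, the same way the volume attachment references it (port taken from the plan output; the depends_on and lifecycle blocks are dropped because the ignore_changes entry names another resource rather than an attribute of this one, so it has no effect):

resource "aws_lb_target_group_attachment" "schemaRegistryTgAttach" {
  count            = "${var.zookeeper-max}"
  target_group_arn = "${aws_alb_target_group.KafkaSchemaRegistryTG.arn}"

  # Reference the instance id directly, as aws_volume_attachment.kafka_att does,
  # so only the index whose instance changed is re-planned.
  target_id = "${element(aws_instance.zookeeper.*.id, count.index)}"
  port      = 8081
}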

Ruby AWS SDK (v2/v3) Tag spot instances

According to this link it is possible to tag spot fleet instances, and the tags are automatically propagated to the launched instances. Is it possible to do the same for normal spot instances? My approach so far:
ec2 = Aws::EC2::Resource.new({region: region, credentials: creds})

launch_specification = {
  :security_groups => ['ccc'],
  :ebs_optimized   => true,
  :image_id        => "image_id",
  :instance_type   => "type",
  :key_name        => "key",
  :placement       => {:group_name => "ggg"},
  :user_data       => ""
}

resp = ec2.client.request_spot_instances(
  :instance_count       => count,
  :launch_specification => launch_specification,
  :spot_price           => price.to_s,
  :type                 => 'one-time',
  :dry_run              => false
)
resp.spot_instance_requests.each do |sir|
  ec2.create_tags({
    dry_run: false,
    resources: [sir.spot_instance_request_id],
    tags: [
      {
        key: "owner",
        value: "ooo",
      },
    ],
  })
end
Tags are created for the spot instance request, but are not propagated to the launched instances.

Elasticsearch AWS with Elastica

Is it possible to connect to an Amazon Elasticsearch domain with Elastica and the "AWS Account access policy"?
When I use "Allow open access to the domain" it works.
$elasticaClient = new \Elastica\Client([
    'connections' => [
        [
            'transport' => 'Https',
            'host' => 'search-xxxxxxxx-zzzzzzzz.us-west-2.es.amazonaws.com',
            'port' => '',
            'curl' => [
                CURLOPT_SSL_VERIFYPEER => false,
            ],
        ],
    ],
]);
But I don't know how to set the required "Authorization header" when I use the "AWS Account access policy".
I am using the FriendsOfSymfony FOSElasticaBundle for Symfony. I solved that problem using AwsAuthV4 as transport like this:
fos_elastica:
    clients:
        default:
            host: "YOURHOST.eu-west-1.es.amazonaws.com"
            port: 9200
            transport: "AwsAuthV4"
            aws_access_key_id: "YOUR_AWS_KEY"
            aws_secret_access_key: "YOUR_AWS_SECRET"
            aws_region: "eu-west-1"
This is not implemented yet, as it needs more than just setting the headers. Best is to follow the issue in the Elastica repository for progress: https://github.com/ruflin/Elastica/issues/948