I am trying to create an Elastic Beanstalk environment in AWS, and I need to enable access logs for the load balancer that Beanstalk creates. I could not find any example in the official Terraform documentation showing how to enable this feature via Terraform code.
resource "aws_elastic_beanstalk_application" "tftest" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
name = "tf-test-name"
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
}
I am trying to enable access logs for the load balancer created by Beanstalk, but there is no mention of such a feature in the Terraform documentation.
You need to use option settings for Elastic Beanstalk [1]:
resource "aws_elastic_beanstalk_environment" "some_env" {
name = "tf-test-name"
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
setting {
namespace = "aws:elbv2:loadbalancer"
name = "AccessLogsS3Bucket"
value = "<valid S3 bucket name>"
}
setting {
namespace = "aws:elbv2:loadbalancer"
name = "AccessLogsS3Enabled"
value = "true"
}
}
Using the same logic, you can optionally define the AccessLogsS3Prefix setting, but it is not required.
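If you want the logs stored under a specific key prefix, a minimal sketch of that optional setting (the prefix value here is just a placeholder):

  setting {
    namespace = "aws:elbv2:loadbalancer"
    name      = "AccessLogsS3Prefix"
    value     = "my-app-alb-logs" # placeholder prefix
  }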
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elbloadbalancer
Related
I have an Elastic Beanstalk application running that was created with Terraform, and I want to encrypt its volumes with customer managed keys (CMKs) from KMS.
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
name = "tf-test-name"
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = "vpc-xxxxxxxx"
}
I already have a module that creates the CMK. So how can I go about doing this in Terraform?
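One possible approach (my own assumption, not something confirmed in this thread) is to enable account-level default EBS encryption with your CMK, so any volumes the Beanstalk environment launches in that region are encrypted with it. A minimal sketch, where module.kms.key_arn is a hypothetical output of your existing KMS module:

resource "aws_ebs_default_kms_key" "this" {
  key_arn = module.kms.key_arn # hypothetical output from your KMS module
}

# Encrypt all newly created EBS volumes in this region by default.
resource "aws_ebs_encryption_by_default" "this" {
  enabled = true
}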
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless#setting_up_regional_routing
I set up a GCE global load balancer and NEGs. What is unclear to me is how a NEG connects to a Cloud Run app. It looks like the service name of a NEG just needs to match the corresponding Cloud Run app name.
I have done this, but it appears they are not connected, and I can't even find in the docs how to troubleshoot this linkage.
Created a NEG via Terraform:
resource "google_compute_region_network_endpoint_group" "neg" {
network_endpoint_type = "SERVERLESS"
region = us-east1
cloud_run {
service = "my-cloudrun-app"
}
}
Then deployed a Cloud Run app:
gcloud run deploy my-cloudrun-app --region us-east1
My understanding is that if the Cloud Run app name matches the service name in the NEG, it should connect. I can see the NEGs are attached to my GCE load balancer and the Cloud Run app was deployed successfully, but the NEG doesn't appear to be routing traffic to my app.
I'm using this official GCP module to hook this up (it actually does make it pretty easy!) https://github.com/terraform-google-modules/terraform-google-lb-http/tree/v6.0.1/modules/serverless_negs
I found it does work the way I expected; the issue was just that I didn't have a Cloud Run app behind one of the regional NEGs I created (I thought I did). I had actually created several regional NEGs, made a bit of a mess, and the regional NEG the LB was routing my traffic to didn't point to a corresponding Cloud Run app.
How I was able to troubleshoot this:
- Found the backend the load balancer was configured with
- In the GCP console, viewed the backend and all the regional NEGs configured for it
- Hit refresh/curl a bunch of times and saw on the backend's page that one of the regional NEGs was actually receiving traffic, so I was at least able to see which NEG my traffic was being routed to
- Realized I hadn't deployed a Cloud Run app with the name that regional NEG was configured for
I still feel like visibility into how all these components play together could be better, but the traffic flow diagram for the Backend service details page was a life saver!
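One way to avoid this kind of name mismatch (just a sketch, and it assumes you also manage the Cloud Run service in Terraform rather than deploying it with gcloud; the image is a placeholder) is to reference the service resource directly, so the NEG can't point at a name that doesn't exist:

resource "google_cloud_run_service" "app" {
  name     = "my-cloudrun-app"
  location = "us-east1"

  template {
    spec {
      containers {
        image = "gcr.io/my-project/my-cloudrun-app" # placeholder image
      }
    }
  }
}

resource "google_compute_region_network_endpoint_group" "neg" {
  name                  = "my-cloudrun-neg"
  network_endpoint_type = "SERVERLESS"
  region                = google_cloud_run_service.app.location

  cloud_run {
    service = google_cloud_run_service.app.name
  }
}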
I don't know if you have already done this, but you need more than the NEG to put it behind a load balancer. Here are the missing pieces:
resource "google_compute_managed_ssl_certificate" "default" {
name = "cert"
managed {
domains = ["${var.domain}"]
}
}
resource "google_compute_backend_service" "default" {
name = "app-backend"
protocol = "HTTP"
port_name = "http"
timeout_sec = 30
backend {
group = google_compute_region_network_endpoint_group.neg.id
}
}
resource "google_compute_url_map" "default" {
name = "app-urlmap"
default_service = google_compute_backend_service.default.id
}
resource "google_compute_target_https_proxy" "default" {
name = "https-proxy"
url_map = google_compute_url_map.default.id
ssl_certificates = [
google_compute_managed_ssl_certificate.default.id
]
}
resource "google_compute_global_forwarding_rule" "default" {
name = "lb"
target = google_compute_target_https_proxy.default.id
port_range = "443"
ip_address = google_compute_global_address.default.address
}
resource "google_compute_global_address" "default" {
name = "address"
}
Easy? Absolutely not. Let me know if you need more details, guidance, or explanation.
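A small addition of my own: to see which IP to point your domain's A record at (the managed certificate won't provision until the domain resolves to the load balancer), you could expose the global address as an output:

output "load_balancer_ip" {
  value = google_compute_global_address.default.address
}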
I would like to deploy a local zip file to Elastic Beanstalk using Terraform. I would also like to keep old versions of the application in S3, with some retention policy, such as keep for 90 days. If I rebuild the bundle, I would like Terraform to detect this and deploy the new version. If the hash of the bundle hasn't changed then Terraform should not change anything.
Here is (some of) my config:
resource "aws_s3_bucket" "application" {
bucket = "test-elastic-beanstalk-bucket"
}
locals {
user_interface_bundle_path = "${path.module}/../../build.zip"
}
resource "aws_s3_bucket_object" "user_interface_latest" {
bucket = aws_s3_bucket.application.id
key = "user-interface-${filesha256(local.user_interface_bundle_path)}.zip"
source = local.user_interface_bundle_path
}
resource "aws_elastic_beanstalk_application" "user_interface" {
name = "${var.environment}-user-interface-app"
}
resource "aws_elastic_beanstalk_application_version" "user_interface_latest" {
name = "user-interface-${filesha256(local.user_interface_bundle_path)}"
application = aws_elastic_beanstalk_application.user_interface.name
bucket = aws_s3_bucket_object.user_interface_latest.bucket
key = aws_s3_bucket_object.user_interface_latest.key
}
resource "aws_elastic_beanstalk_environment" "user_interface" {
name = "${var.environment}-user-interface-env"
application = aws_elastic_beanstalk_application.user_interface.name
solution_stack_name = "64bit Amazon Linux 2018.03 v4.15.0 running Node.js"
version_label = aws_elastic_beanstalk_application_version.user_interface_latest.name
}
The problem with this is that each time the hash of the bundle changes, it deletes the old object in S3.
How can I get Terraform to create a new aws_s3_bucket_object and not delete the old one?
This is related, but I don't want to maintain build numbers: Elastic Beanstalk Application Version in Terraform
Expanding on Marcin's comment...
You should enable bucket versioning and add a lifecycle rule to delete noncurrent versions after 90 days.
Here is an example:
resource "aws_s3_bucket" "application" {
bucket = "test-elastic-beanstalk-bucket"
versioning {
enabled = true
}
lifecycle_rule {
id = "retention"
noncurrent_version_expiration {
days = 90
}
}
}
You can see more examples in the documentation:
https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#using-object-lifecycle
Then I would simplify your aws_s3_bucket_object: since we have versioning, we don't really need the filesha256 in the key; just use the original name build.zip and you're good to go.
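A minimal sketch of that simplification (the etag argument is my addition so Terraform re-uploads the object whenever the bundle's content changes; the old content is kept as a noncurrent version by the bucket's versioning):

resource "aws_s3_bucket_object" "user_interface_latest" {
  bucket = aws_s3_bucket.application.id
  key    = "build.zip"
  source = local.user_interface_bundle_path
  etag   = filemd5(local.user_interface_bundle_path) # triggers an update when the file changes
}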
If you don't want to enable bucket versioning, another way would be to use the AWS CLI to upload the file before you call Terraform, or do it in a local-exec from a null_resource (see the sketch after the link). Here are a couple of examples:
https://www.terraform.io/docs/provisioners/local-exec.html#interpreter-examples
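A rough sketch of the null_resource approach; the bucket name and path are placeholders, and it assumes the aws CLI is installed and credentialed wherever Terraform runs:

resource "null_resource" "upload_bundle" {
  # Re-run the upload whenever the bundle content changes.
  triggers = {
    bundle_hash = filesha256(local.user_interface_bundle_path)
  }

  provisioner "local-exec" {
    command = "aws s3 cp ${local.user_interface_bundle_path} s3://test-elastic-beanstalk-bucket/user-interface-${filesha256(local.user_interface_bundle_path)}.zip"
  }
}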
I am trying to spin up an ECS cluster with Terraform, but I cannot get the EC2 instances to register as container instances in the cluster.
I first tried the verified module from Terraform, but it seems outdated (ecs-instance-profile has the wrong path).
Then I tried another module from anrim, but still no container instances. Here is the script I used:
provider "aws" {
region = "us-east-1"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.21.0"
name = "ecs-alb-single-svc"
cidr = "10.10.10.0/24"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.10.10.0/27", "10.10.10.32/27", "10.10.10.64/27"]
public_subnets = ["10.10.10.96/27", "10.10.10.128/27", "10.10.10.160/27"]
tags = {
Owner = "user"
Environment = "me"
}
}
module "ecs_cluster" {
source = "../../modules/cluster"
name = "ecs-alb-single-svc"
vpc_id = module.vpc.vpc_id
vpc_subnets = module.vpc.private_subnets
tags = {
Owner = "user"
Environment = "me"
}
}
I then created a new ECS cluster (from the AWS console) on the same VPC and carefully compared the differences in resources. I managed to find some small differences, fixed them, and tried again. But still no container instances!
A fork of the module is available here.
Can you see instances being created in the autoscaling group? If so, I'd suggest SSHing into one of them (either directly or using a bastion host, e.g. see this module) and checking the ECS agent logs. In my experience these problems are usually related to IAM policies, and that's pretty visible in the logs, but YMMV.
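For reference, a minimal sketch of the IAM pieces a container instance typically needs in case your cluster module isn't attaching them; the role and profile names are placeholders, and the attached managed policy is the standard ECS one:

resource "aws_iam_role" "ecs_instance" {
  name = "ecs-instance-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Lets the ECS agent register the instance with the cluster.
resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = aws_iam_role.ecs_instance.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance" {
  name = "ecs-instance-profile"
  role = aws_iam_role.ecs_instance.name
}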
I am using Terraform to successfully spin up some Elastic Beanstalk apps (Single Docker configuration) and enable auto-scaling as part of the app / environment creation.
This works fine in most regions I’ve tried, but when I try to spin it up in London (eu-west-2) I get an error:
Error: Error applying plan:
1 error(s) occurred:
* aws_elastic_beanstalk_environment.my-service-env: 1 error(s) occurred:
* aws_elastic_beanstalk_environment.my-service-env: Error waiting for Elastic Beanstalk Environment (e-mt7f3i5bmq) to become ready: 2 error(s) occurred:
* 2018-06-11 19:31:29.28 +0000 UTC (e-mt7f3i5bmq) : Environment must have instance profile associated with it.
* 2018-06-11 19:31:29.39 +0000 UTC (e-mt7f3i5bmq) : Failed to launch environment.
I have found that if I manually attach the aws-elasticbeanstalk-ec2-role as the IamInstanceProfile it works fine - but this relies on the role having been automatically created previously...
Is there something about the eu-west-2 region which would mean the Beanstalk apps don’t get created with the instance profile as they do in other regions?
What am I missing?
Thanks for your help!
For others stuck on this issue: I found a solution by adding the instance profile directly as a setting. This instance profile doesn't get added automatically like it does when creating an Elastic Beanstalk environment through the console. Below is the full Beanstalk environment resource:
resource "aws_elastic_beanstalk_environment" "beanstalkenvironment" {
name = "dev-example"
application = aws_elastic_beanstalk_application.beanstalkapp.name
solution_stack_name = "64bit Amazon Linux 2018.03 v2.14.1 running Docker 18.09.9-ce"
version_label = aws_elastic_beanstalk_application_version.beanstalkapplicationversion.name
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "InstanceType"
value = "t2.micro"
}
tags = {
Name = "test"
Environment = "test"
}
}
The exact setting used to fix this error was:
setting {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "IamInstanceProfile"
  value     = "aws-elasticbeanstalk-ec2-role"
}
To find the required value "aws-elasticbeanstalk-ec2-role", I checked an existing Elastic Beanstalk environment that had been created through the console. Under the environment's configuration there is a security section, and the role you need is listed as "IAM instance profile". Hopefully this helps others who get stuck on this issue.
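If you'd rather not depend on the console having created aws-elasticbeanstalk-ec2-role already (the original problem in a fresh region), here is a sketch of creating your own instance profile in Terraform; the names are placeholders and the attached managed policy is the standard web-tier one:

resource "aws_iam_role" "beanstalk_ec2" {
  name = "my-beanstalk-ec2-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "beanstalk_web_tier" {
  role       = aws_iam_role.beanstalk_ec2.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}

resource "aws_iam_instance_profile" "beanstalk_ec2" {
  name = "my-beanstalk-ec2-profile"
  role = aws_iam_role.beanstalk_ec2.name
}

The environment's IamInstanceProfile setting would then use aws_iam_instance_profile.beanstalk_ec2.name as its value instead of the hard-coded role name.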