I am trying to run an EC2 instance on LocalStack using Terraform.
After 50 minutes of trying to create the instance,
I got this response from terraform apply:
Error: error getting EC2 Instance (i-cf4da152ddf3500e1) Credit
Specifications: SerializationError: failed to unmarshal error message
status code: 500, request id: caused by: UnmarshalError: failed to
unmarshal error message caused by: expected element type <Response>
but have <title>
on main.tf line 34, in resource "aws_instance" "example": 34:
resource "aws_instance" "example" {
For LocalStack and Terraform v0.12.18 I use this configuration:
provider "aws" {
access_key = "mock_access_key"
region = "us-east-1"
s3_force_path_style = true
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4567"
cloudformation = "http://localhost:4581"
cloudwatch = "http://localhost:4582"
dynamodb = "http://localhost:4569"
es = "http://localhost:4578"
firehose = "http://localhost:4573"
iam = "http://localhost:4593"
kinesis = "http://localhost:4568"
lambda = "http://localhost:4574"
route53 = "http://localhost:4580"
redshift = "http://localhost:4577"
s3 = "http://localhost:4572"
secretsmanager = "http://localhost:4584"
ses = "http://localhost:4579"
sns = "http://localhost:4575"
sqs = "http://localhost:4576"
ssm = "http://localhost:4583"
stepfunctions = "http://localhost:4585"
sts = "http://localhost:4592"
ec2 = "http://localhost:4597"
}
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
I run LocalStack with docker-compose up, built directly from the latest GitHub version (https://github.com/localstack/localstack).
From the logs I have seen that the EC2-related endpoint was set up.
I appreciate any advice that would help me run EC2 on LocalStack.
This works fine with the below Docker image of LocalStack:
docker run -it -p 4500-4600:4500-4600 -p 8080:8080 --expose 4572 localstack/localstack:0.11.1
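If you prefer the docker-compose setup mentioned in the question, an equivalent service definition pinned to the same image could look like this (a sketch; the port mappings mirror the docker run command above):

version: "3.2"
services:
  localstack:
    # pinned to the version that worked for this setup
    image: localstack/localstack:0.11.1
    ports:
      - "4500-4600:4500-4600"
      - "8080:8080"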
resource "aws_instance" "web" {
ami = "ami-0d57c0143330e1fa7"
instance_type = "t2.micro"
tags = {
Name = "HelloWorld"
}
}
provider "aws" {
region = "us-east-1"
s3_force_path_style = true
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4567"
cloudformation = "http://localhost:4581"
cloudwatch = "http://localhost:4582"
dynamodb = "http://localhost:4569"
es = "http://localhost:4578"
firehose = "http://localhost:4573"
iam = "http://localhost:4593"
kinesis = "http://localhost:4568"
lambda = "http://localhost:4574"
route53 = "http://localhost:4580"
redshift = "http://LOCALHOST:4577"
s3 = "http://localhost:4572"
secretsmanager = "http://localhost:4584"
ses = "http://localhost:4579"
sns = "http://localhost:4575"
sqs = "http://localhost:4576"
ssm = "http://localhost:4583"
stepfunctions = "http://localhost:4585"
sts = "http://localhost:4592"
ec2 = "http://localhost:4597"
}
}
terraform apply
aws_instance.web: Destroying... [id=i-099392def6b574255]
aws_instance.web: Still destroying... [id=i-099392def6b574255, 10s elapsed]
aws_instance.web: Destruction complete after 10s
aws_instance.web: Creating...
aws_instance.web: Still creating... [10s elapsed]
aws_instance.web: Creation complete after 12s [id=i-9c942d138970d44a4]
Apply complete! Resources: 1 added, 0 changed, 1 destroyed.
Note: it's a dummy instance, so it won't be available for SSH and the like; however, it is suitable for testing the terraform apply/destroy use case on EC2.
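To sanity-check that the mock instance really exists in LocalStack, you can query the EC2 endpoint from the provider configuration above directly with the AWS CLI (a sketch; it assumes the same mock credentials are used):

# list instances known to LocalStack's EC2 mock on port 4597
AWS_ACCESS_KEY_ID=mock_access_key AWS_SECRET_ACCESS_KEY=mock_secret_key \
  aws --endpoint-url=http://localhost:4597 ec2 describe-instances --region us-east-1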
When I create an EKS cluster with a single node pool using Terraform, I face a kubelet certificate problem, i.e. the CSRs are stuck in the Pending state, like this:
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-8qmz5 4m57s kubernetes.io/kubelet-serving kubernetes-admin <none> Pending
csr-mq9rx 5m kubernetes.io/kubelet-serving kubernetes-admin <none> Pending
As we can see, the REQUESTOR here is kubernetes-admin, and I'm really not sure why.
My Terraform code for the cluster itself:
resource "aws_eks_cluster" "eks" {
name = var.eks_cluster_name
role_arn = var.eks_role_arn
version = var.k8s_version
vpc_config {
endpoint_private_access = "true"
endpoint_public_access = "true"
subnet_ids = var.eks_public_network_ids
security_group_ids = var.eks_security_group_ids
}
kubernetes_network_config {
ip_family = "ipv4"
service_ipv4_cidr = "10.100.0.0/16"
}
}
Terraform code for nodegroup:
resource "aws_eks_node_group" "aks-NG" {
depends_on = [aws_ec2_tag.eks-subnet-cluster-tag, aws_key_pair.eks-deployer]
cluster_name = aws_eks_cluster.eks.name
node_group_name = "aks-dev-NG"
ami_type = "AL2_x86_64"
node_role_arn = var.eks_role_arn
subnet_ids = var.eks_public_network_ids
capacity_type = "ON_DEMAND"
instance_types = var.eks_nodepool_instance_types
disk_size = "50"
scaling_config {
desired_size = 2
max_size = 2
min_size = 2
}
tags = {
Name = "${var.eks_cluster_name}-node"
"kubernetes.io/cluster/${var.eks_cluster_name}" = "owned"
}
remote_access {
ec2_ssh_key = "eks-deployer-key"
}
}
Per my understanding, this is a very basic configuration.
Now, when I create the cluster and node group via the AWS Management Console with exactly the SAME parameters, i.e. the cluster IAM role and node group IAM role are the same as for Terraform, everything is fine:
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-86qtg 6m20s kubernetes.io/kubelet-serving system:node:ip-172-31-201-140.ec2.internal <none> Approved,Issued
csr-np42b 6m43s kubernetes.io/kubelet-serving system:node:ip-172-31-200-199.ec2.internal <none> Approved,Issued
But here the certificate requestor is the node itself (per my understanding). So I would like to know: what is the problem here? Why is the requestor different in this case, what is the difference between creating these resources from the AWS Management Console and using Terraform, and how do I deal with this issue? Please help.
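For completeness, the pending CSRs can be approved by hand while investigating; this is only a stopgap, not a fix for the underlying cause:

# approve the two pending CSRs from the output above (stopgap only)
kubectl certificate approve csr-8qmz5 csr-mq9rx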
UPD.
I found that this problem appears when I create the cluster using Terraform via an assumed role created for Terraform.
When I create the cluster using Terraform with regular IAM user credentials, with the same permission set, everything is fine.
This doesn't give any answer regarding the root cause, but it's still something to consider.
Right now it looks like a weird EKS bug.
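For context, "via an assumed role" here means the provider was configured roughly like this (a sketch; the role ARN below is a placeholder):

provider "aws" {
  region = "us-east-1"

  assume_role {
    # hypothetical role ARN; the real one is the role created for Terraform
    role_arn = "arn:aws:iam::111111111111:role/terraform"
  }
}

The identity that creates the cluster is then the assumed role rather than the IAM user, which matches the observation above.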
I am testing my AWS Terraform configuration with LocalStack. The final goal is to make a queue listen to my topic.
I am running LocalStack with the following command:
docker run --rm -it -p 4566:4566 localstack/localstack
After running terraform destroy, I get this error message:
aws_sns_topic_subscription.subscription: Destroying... [id=arn:aws:sns:us-east-1:000000000000:topic:a0d47652-3ae4-46df-9b63-3cb6e154cfcd]
╷
│ Error: error waiting for SNS topic subscription (arn:aws:sns:us-east-1:000000000000:topic:a0d47652-3ae4-46df-9b63-3cb6e154cfcd) deletion: InvalidParameter: Unable to find subscription for given ARN
│ status code: 400, request id: 2168e636
│
│
╵
I have run the code against real AWS without a problem.
Here is the code for the Terraform file:
terraform {
  required_version = ">= 0.12.26"
}

provider "aws" {
  region                      = "us-east-1"
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    sns = "http://localhost:4566"
    sqs = "http://localhost:4566"
  }
}

resource "aws_sqs_queue" "queue" {
  name = "queue"
}

resource "aws_sns_topic" "topic" {
  name = "topic"
}

resource "aws_sns_topic_subscription" "subscription" {
  endpoint  = aws_sqs_queue.queue.arn
  protocol  = "sqs"
  topic_arn = aws_sns_topic.topic.arn
}
Sadly, this is an issue with AWS itself; you have to create a ticket (look here and at https://stackoverflow.com/a/64568018/6085193):
"When you delete a topic, subscriptions to the topic will not be "deleted" immediately, but become orphans. SNS will periodically clean these orphans, usually every 10 hours, but not guaranteed. If you create a new topic with the same topic name before these orphans are cleared up, the new topic will not inherit these orphans. So, no worry about them"
This has since been fixed; see this LocalStack issue:
https://github.com/localstack/localstack/issues/4022
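To see what LocalStack actually holds before and after the destroy, the subscriptions can be listed against the edge port (dummy credentials are enough for LocalStack):

# list the SNS subscriptions LocalStack currently knows about
aws --endpoint-url=http://localhost:4566 sns list-subscriptions --region us-east-1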
I've been trying to create a local development environment to play with Terraform, with LocalStack (https://github.com/localstack/localstack) running on Docker.
I was already able to create an S3 bucket to store the Terraform state, but I also want to simulate DynamoDB as the state lock.
The configuration is:
localstack docker-compose.yml:
version: "3.2"
services:
localstack:
image: localstack/localstack:latest
container_name: localstack
ports:
- "4563-4599:4563-4599"
- "8080:8080"
environment:
- DATA_DIR=/tmp/localstack/data
- DEBUG=1
volumes:
- "./.localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
First Terraform configuration:
Used as an initial bootstrap to create the S3 tfstate storage and the DynamoDB table for the tfstate lock.
provider "aws" {
region = "us-east-1"
access_key = "foo"
secret_key = "bar"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "terraform-state"
acl = "private"
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_public_access_block" "terraform_state_access" {
bucket = aws_s3_bucket.terraform_state.id
block_public_acls = true
ignore_public_acls = true
block_public_policy = true
restrict_public_buckets = true
}
resource "aws_dynamodb_table" "terraform_state_lock" {
name = "terraformlock"
read_capacity = 5
write_capacity = 5
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
Second Terraform configuration:
Creates the resources, stores the state in S3, and uses DynamoDB for the lock.
terraform {
  backend "s3" {
    bucket                      = "terraform-state"
    key                         = "main/terraform.tfstate"
    region                      = "us-east-1"
    endpoint                    = "http://localhost:4566"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
    dynamodb_table              = "terraformlock"
    encrypt                     = true
  }
}

provider "aws" {
  region     = "us-east-1"
  access_key = "foo"
  secret_key = "bar"

  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true
  s3_force_path_style         = true

  endpoints {
    apigateway     = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    cloudwatch     = "http://localhost:4566"
    dynamodb       = "http://localhost:4566"
    es             = "http://localhost:4566"
    ec2            = "http://localhost:4566"
    firehose       = "http://localhost:4566"
    iam            = "http://localhost:4566"
    kinesis        = "http://localhost:4566"
    lambda         = "http://localhost:4566"
    route53        = "http://localhost:4566"
    redshift       = "http://localhost:4566"
    s3             = "http://localhost:4566"
    secretsmanager = "http://localhost:4566"
    ses            = "http://localhost:4566"
    sns            = "http://localhost:4566"
    sqs            = "http://localhost:4566"
    ssm            = "http://localhost:4566"
    stepfunctions  = "http://localhost:4566"
    sts            = "http://localhost:4566"
  }
}

resource "aws_sqs_queue" "test" {
  name = "test"

  tags = {
    "Environment" = "dev"
  }
}

resource "aws_sns_topic" "test" {
  name         = "test"
  display_name = "test"
}
Whenever I apply the second configuration, I get this error:
❯ terraform apply
Acquiring state lock. This may take a few moments...
Error: Error locking state: Error acquiring the state lock: 2 errors occurred:
* UnrecognizedClientException: The security token included in the request is invalid.
status code: 400, request id: UEGJV0SQ614NIEDRB93IAF0JQ7VV4KQNSO5AEMVJF66Q9ASUAAJG
* UnrecognizedClientException: The security token included in the request is invalid.
status code: 400, request id: U1IRF6CHGK7RM4SQEGVCSU699RVV4KQNSO5AEMVJF66Q9ASUAAJG
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
Has anyone tried this, or does anyone have an idea of what is causing it?
This probably happens because you are hitting the real DynamoDB service rather than the one from LocalStack. To use LocalStack, you have to add
dynamodb_endpoint = "http://localhost:4566"
to your S3 backend configuration. Once you have updated the backend setup, you will have to reinitialize Terraform with terraform init.
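The updated backend block would then look roughly like this (same values as in the question, with only dynamodb_endpoint added):

terraform {
  backend "s3" {
    bucket                      = "terraform-state"
    key                         = "main/terraform.tfstate"
    region                      = "us-east-1"
    endpoint                    = "http://localhost:4566"
    # point the lock-table calls at LocalStack instead of real DynamoDB
    dynamodb_endpoint           = "http://localhost:4566"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
    dynamodb_table              = "terraformlock"
    encrypt                     = true
  }
}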
We have a private cloud hosted in our datacenter which is a stripped-down version of AWS. We have exposed the EC2 APIs to allow users to create VMs using the AWS CLI.
I am trying to create VMs using Terraform, and for initial tests I created a .tf file as below:
provider "aws" {
access_key = "<key>"
secret_key = "<key>"
region = "us-west-1"
skip_credentials_validation = true
endpoints
{
ec2 = "https://awsserver/services/api/aws/ec2"
}
}
resource "aws_instance" "Automation" {
ami = "ami-100011201"
instance_type = "c3.xlarge"
subnet_id = "subnet1:1"
}
This is the error message after running terraform plan
Error: Error running plan: 1 error(s) occurred:
* provider.aws: AWS account ID not previously found and failed retrieving via all available methods. See https://www.terraform.io/docs/providers/aws/index.html#skip_requesting_account_id for workaround and implications. Errors: 2 errors occurred:
* error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 58f9d498-6259-11e9-b146-95598aa219b5
* failed getting account information via iam:ListRoles: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: c10f8a06-58b4-4d0c-956a-5c8c684664ea
We haven't implemented STS, and the query always goes to the AWS cloud instead of the private cloud API server.
What am I missing?
This worked for me to create a VM:
provider "aws" {
access_key = "<key>"
secret_key = "<key>"
region = "us-west-1"
skip_credentials_validation =true
skip_requesting_account_id = true
skip_metadata_api_check = true
endpoints
{
ec2 = "https://awsserver/services/api/aws/ec2"
}
}
resource "aws_instance" "Automation" {
ami = "ami-100011201"
instance_type = "c3.xlarge"
subnet_id = "subnet1:1"
}
It creates a VM; however, the command eventually errors out with:
aws_instance.Automation: Still creating... (1h22m4s elapsed)
aws_instance.Automation: Still creating... (1h22m14s elapsed)
aws_instance.Automation: Still creating... (1h22m24s elapsed)
Error: Error applying plan:
1 error(s) occurred:
* aws_instance.Automation: 1 error(s) occurred:
* aws_instance.Automation: Error waiting for instance (i-101149362) to become ready: timeout while waiting for state to become 'running' (last state: 'pending', timeout: 10m0s)
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
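If the private cloud simply needs more than the default 10 minutes to report the instance as running, raising the create timeout on the resource may help; a sketch (the 30m value is an assumption):

resource "aws_instance" "Automation" {
  ami           = "ami-100011201"
  instance_type = "c3.xlarge"
  subnet_id     = "subnet1:1"

  timeouts {
    # assumption: the private cloud takes longer than the 10m default to reach 'running'
    create = "30m"
  }
}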
I am provisioning an AWS Spot Fleet using Terraform.
The Spot Fleet gets provisioned successfully, but the issue comes when I try to SSH into the instances so that I can run my Node.js app on them (pre-installed on the AMI). I am not able to retrieve the public_ip address of the instances that have been provisioned.
My request looks like this:
main.tf
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "${var.aws_region}"
}
resource "aws_spot_fleet_request" "spot_fleet" {
iam_fleet_role = "${var.iam_fleet_role}"
spot_price = "${var.spot_price}"
allocation_strategy = "${var.allocation_strategy}"
target_capacity = "${var.target_capacity}"
terminate_instances_with_expiration = "${var.terminate_instances_with_expiration}"
valid_until = "${var.valid_until}"
wait_for_fulfillment = "${var.wait_for_fulfillment}"
launch_specification {
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
ami = "${var.ami}"
vpc_security_group_ids = ["${var.security_group_id}"]
root_block_device {
volume_size = "${var.volume_size}"
volume_type = "${var.volume_type}"
}
}
provisioner "remote-exec" {
inline = [
"cd my-app",
"npm install",
"npm start",
]
connection {
host = "${self.public_ip}"
type = "ssh"
user = "ubuntu"
private_key = "${file("my-secret.pem")}"
timeout = "2m"
agent = false
}
}
}
This is the error which I get:
Error applying plan:
1 error(s) occurred:
* aws_spot_fleet_request.spot_fleet: 1 error(s) occurred:
* Resource 'aws_spot_fleet_request.spot_fleet' does not have attribute 'public_ip' for variable 'aws_spot_fleet_request.spot_fleet.0.public_ip'
The provisioned spot instances do have public IP addresses, and I can see them in the AWS console.
I am also able to SSH into the instances from a terminal.
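The aws_spot_fleet_request resource does not export a public_ip attribute, which is why self.public_ip fails. One possible workaround (a sketch, not from the original post) is to look the instances up after the fleet is fulfilled via the aws_instances data source, filtering on the tag EC2 adds to spot-fleet instances:

data "aws_instances" "fleet" {
  filter {
    # EC2 tags instances launched by a spot fleet with the fleet request id
    name   = "tag:aws:ec2spot:fleet-request-id"
    values = ["${aws_spot_fleet_request.spot_fleet.id}"]
  }
}

output "spot_fleet_public_ips" {
  value = "${data.aws_instances.fleet.public_ips}"
}

The remote-exec provisioner would then have to move out of the spot fleet resource (for example into a null_resource keyed on those IPs), since the fleet resource itself never knows the instances' addresses.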