My end goal is to create a 3-node kubeadm Kubernetes cluster with Terraform and/or Ansible.
As of now I am provisioning three identical instances with Terraform.
Then, with remote-exec and inline, I install the packages that all three instances have in common.
Now I want to install specific packages on only one of those three instances. I am trying to achieve this using local-exec.
I am struggling to connect to only one instance with local-exec. I know how to connect to all of them and execute a playbook against all three, but the end goal is to connect to a single instance only.
The code snippet:
resource "aws_instance" "r100c96" {
count = 3
ami = "ami-0b9064170e32bde34"
instance_type = "t2.micro"
key_name = local.key_name
tags = {
Name = "terra-ans${count.index}"
}
provisioner "remote-exec" {
connection {
host = "${self.public_ip}"
type = "ssh"
user = local.ssh_user
private_key = file(local.private_key_path)
}
inline = ["sudo hostnamectl set-hostname test"]
}
provisioner "local-exec" {
command = "ansible-playbook -i ${element((aws_instance.r100c96.*.public_ip),0)}, --private-key ${local.private_key_path} helm.yaml"
}
...
}
Thanks,
You can use a null_resource and run your remote-exec (or local-exec) against the selected instance only, once all three instances in aws_instance.r100c96 are provisioned.
Also, instead of the * splat, use count.index (or an element(...) lookup), so that each run passes the specific VM's IP.
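A minimal sketch of the null_resource approach, assuming the aws_instance.r100c96 resource and the local.ssh_user / local.private_key_path values from the question; the resource name helm_provision is illustrative:
resource "null_resource" "helm_provision" {
  # Re-run the provisioner only when the first instance is replaced.
  triggers = {
    instance_id = aws_instance.r100c96[0].id
  }

  provisioner "local-exec" {
    # The trailing comma tells ansible-playbook this is an inline inventory,
    # not an inventory file, so only the first instance is targeted.
    command = "ansible-playbook -i '${aws_instance.r100c96[0].public_ip},' -u ${local.ssh_user} --private-key ${local.private_key_path} helm.yaml"
  }
}
Referencing aws_instance.r100c96[0] from a separate null_resource avoids the self-reference problem you hit when pointing at the resource from inside its own provisioner block.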
Also, there are multiple ways to provision a VM with Ansible.
Consider whether you can dynamically build your hosts/inventory file and provision the machines in parallel instead of one at a time.
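For example, a hedged sketch of generating an inventory from all three instances with the hashicorp/local provider; the group names and the inventory.ini file name are illustrative:
resource "local_file" "inventory" {
  filename = "${path.module}/inventory.ini"
  content  = <<-EOT
    [control_plane]
    ${aws_instance.r100c96[0].public_ip}

    [workers]
    ${aws_instance.r100c96[1].public_ip}
    ${aws_instance.r100c96[2].public_ip}
  EOT
}
You can then run ansible-playbook -i inventory.ini once against the whole cluster and restrict individual plays to a group (e.g. hosts: control_plane) inside the playbook.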
I need some help from a Terraform expert.
I'm going to create an EC2 instance and install some packages on it using Terraform.
To install the packages, I used a Terraform provisioner. This is the EC2 instance part:
resource "aws_instance" "lms_server" {
ami = var.AMI
instance_type = var.instance_type
key_name = var.private_key
iam_instance_profile = aws_iam_instance_profile.instance_profile.name
associate_public_ip_address = true
subnet_id = aws_subnet.main-public-1.id
vpc_security_group_ids = [aws_security_group.security_rule.id]
provisioner "file" {
source = "script.sh"
destination = "/tmp/script.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/script.sh",
"/tmp/script.sh ${var.gh_user} ${var.gh_token} ${var.gh_url} ${aws_db_instance.lms_mysql_db.address} ${var.db_name} ${var.db_username} ${var.db_password} ${aws_sqs_queue.lms_queue.id} ${var.sqs_name} ${self.public_ip} ${var.aws_region} ${var.bucket_name}",
]
}
connection {
type = "ssh"
host = "${self.public_ip}"
user = var.user_name
private_key = "${file("lms_key.pem")}"
}
root_block_device {
volume_size = var.volume_size
}
tags = {
lms_app = "lms_server"
}
}
As you can see, I access the EC2 instance via SSH, copy the script.sh file that contains all the commands, and then run it. I think the EC2 instance was created successfully and all packages were installed, but the Terraform CLI keeps showing the instance in the "still creating..." status.
This means the creation of the EC2 instance never finishes, so if I abort it (Ctrl+C) and then run terraform apply again, it destroys the instance and creates it from scratch, installing all the packages again.
This happens every time I update the Terraform configuration for other resources, not just the EC2 instance.
I'm looking forward to getting some help with this problem.
Thank you for your time and consideration.
Can you show us the script.sh content? Maybe something in that script keeps running so it can never finish.
You can try using user_data instead. It will run your script during the bootstrap phase after the EC2 instance is created, and its output is logged on the instance.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#user_data
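A minimal sketch of that approach, reusing the same script.sh; note that the arguments your remote-exec passes on the command line would have to be baked into the script or rendered some other way (e.g. with templatefile()), which is not shown here:
resource "aws_instance" "lms_server" {
  ami           = var.AMI
  instance_type = var.instance_type
  # ... the rest of the arguments from the question ...

  # Runs once at first boot via cloud-init instead of over SSH,
  # so terraform apply does not block waiting for the script.
  user_data = file("script.sh")
}
Keep in mind that user_data only runs on the instance's first boot.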
I have a shell script which I want to configure on an AWS EC2 instance to run every hour. I am using Terraform to launch the EC2 instance. Is it possible to configure the hourly execution of the shell script through Terraform itself while launching the EC2 instance?
Yes, in the aws_instance resource you can use the user_data argument to execute a script at launch that registers a cron job that executes hourly:
resource "aws_instance" "foo" {
ami = "ami-005e54dee72cc1d00" # us-west-2
instance_type = "t2.micro"
...
user_data = <<-EOF
sudo service cron start
echo '0 * * * * date >> ~/somefile' | crontab
EOF
}
Ensure that NTP is configured on the instance and that you are using UTC for the system time.
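If the chosen AMI does not already default to UTC, the user_data above could be extended along these lines (a hedged sketch assuming a systemd-based Ubuntu/Debian image where the cron service is named cron; on Amazon Linux it is crond):
  user_data = <<-EOF
    #!/bin/bash
    # Use UTC for the system clock so the cron schedule is unambiguous.
    timedatectl set-timezone UTC
    # Start the cron daemon and register the hourly job.
    sudo service cron start
    echo '0 * * * * date >> ~/somefile' | crontab
  EOF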
Helpful links
AWS EC2 Documentation, User Data
POSIX crontab
Terraform AWS provider, EC2 instance user_data
We can display predefined attributes such as aws_instance.my-instance.public_ip through output variables at the end of a terraform apply run.
In a similar way, is there a way to output custom information from the new instance at the end, such as the contents of a system file or the output of a command, e.g. echo hello! or cat /var/log/hello.log?
You can use Terraform provisioners. They are basically an interface for running commands and scripts on the remote machine (or locally, depending on the provisioner) to achieve certain tasks, which in most cases are bootstrapping tasks.
resource "aws_instance" "example" {
ami = "ami-b374d5a5"
instance_type = "t2.micro"
provisioner "local-exec" {
command = "echo ${aws_instance.example.public_ip} > ip_address.txt"
}
}
You can read more about them here: https://learn.hashicorp.com/terraform/getting-started/provision
However, keep in mind that provisioners are Terraform objects and are not bound to the instances themselves, so they only execute when you use Terraform to spin up or edit instances. These bootstrapping scripts won't come into effect if your instance is created by an ASG during a scale-out operation or by an orchestration tool. For that purpose, using the instance's user_data is the best option.
As @SajjadHashmi said, you can use local-exec to run commands on your local host, with some limitations.
Thus, in desperation, you can use ssh and scp commands on your local host to fetch files from the instance and execute commands there. This is not a very nice way, but as a measure of last resort it could be considered in some scenarios.
resource "aws_instance" "web" {
# other attributes
provisioner "local-exec" {
command = <<-EOL
# give time to instance to properly boot
sleep 30
# download a /var/log/cloud-init-output.log
# file from the instance to the host's /tmp folder
scp -i private_ssh_key -oStrictHostKeyChecking=no ubuntu#${self.public_ip}:/var/log/cloud-init-output.log /tmp
# execute command ls -a on the instance and save output to
# local file /tmp/output.txt
ssh -i private_ssh_key -oStrictHostKeyChecking=no ubuntu#${self.public_ip} ls -a >> /tmp/output.txt
EOL
}
}
I would like to start a task definition on an instance within my cluster (not in the default one). So, something like:
create a cluster
create a task definition with a docker image (I have a docker image already pushed to ecs)
run the task definition in the cluster
I would also like to add a key pair to the EC2 instance for SSH access.
I have tried to use these functions from boto3 (ec2, ecs):
create_cluster
run_task
register_container_instance
register_task_definition
run_instances
I managed to run an instance with run_instances and it works perfectly well, but I want to run the instance in my cluster. Here is my code:
import boto3

ec2 = boto3.client('ec2')


def run_instances():
    response = ec2.run_instances(
        BlockDeviceMappings=[
            {
                'DeviceName': '/dev/xvda',
                'Ebs': {
                    'DeleteOnTermination': True,
                    'VolumeSize': 8,
                    'VolumeType': 'gp2'
                },
            },
        ],
        ImageId='ami-06df494fbd695b854',
        InstanceType='m3.medium',
        MaxCount=1,
        MinCount=1,
        Monitoring={
            'Enabled': False
        }
    )
    return response
There is a running instance in the EC2 console, but it doesn't appear in any of the clusters in the ECS console (I tried it with an ECS-optimized AMI and with a regular one).
I also tried to follow these steps to get my system up and running in a cluster, without success:
https://github.com/spulec/moto/blob/master/tests/test_ecs/test_ecs_boto3.py
Could you please help me find out what I am missing? Is there any other setup I have to do besides calling these SDK functions?
Thank you!
You will need to run an instance that uses an ECS-optimized AMI, since those AMIs have the ECS agent preinstalled; otherwise you would need to install the ECS agent yourself and bake a custom AMI.
By default, an ECS-optimized instance launches into your default cluster, but you can specify an alternative cluster name in the UserData property of the run_instances function:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
The list of available ECS-optimized AMIs is available in the AWS documentation.
I have an issue with Terraform provisioning. When I run Terraform for the first time, I use an SSH key generated in the AWS console. This key is assigned to the ubuntu user (it's an Ubuntu 16.04 AMI). Then I run remote-exec provisioning:
provisioner "remote-exec" {
inline = [
"sudo apt -y update && sudo apt install -y python"
]
connection {
user = "ubuntu"
private_key = "${file("${var.aws_default_key_name}.pem")}"
}
}
I need Python installed so I can use Ansible later. That's the only place where I need this key, and never again, because I create my own user with my own private key. However, when I run Terraform later it still looks for the file file("${var.aws_default_key_name}.pem").
Now my question is: how do I skip this provisioning on subsequent runs?
I don't want to store the SSH key in the repository.
I could create an empty file to "trick" Terraform, but I don't like this solution.
Any better ideas?
Instead of doing provisioning in the aws_instance block, move it out to a null_resource block, with appropriate triggers.
resource "aws_instance" "cluster" {
count = 3
# ...
}
resource "null_resource" "cluster" {
# Changes to any instance of the cluster requires re-provisioning
triggers {
cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
}
connection {
host = "${element(aws_instance.cluster.*.public_ip, 0)}"
}
provisioner "remote-exec" {
inline = [something]
}
}
If your triggers do not change, the null_resource provisioning will not be re-run on subsequent applies.
Sparrowform is a lightweight provisioner for Terraform-based infrastructure. Its benefit over other provisioning tools is that the terraform apply stage, which bootstraps the infrastructure, is decoupled from the provisioning stage, so you may do this:
$ terraform apply # does infra bootstrap
$ nano sparrowfile # Sparrowdo equivalent for remote-exec chunk
#!/usr/bin/env perl6
bash 'apt -y update';
package-install 'python';
$ sparrowform --ssh_user=my-user --ssh_private_key=/path/to/key # do provision stage
Obviously you are free not to run sparrowform on subsequent runs. It does its job (installing the Ansible-related dependencies, that is it). Then you drop your initial ssh_private_key and go with the new private key (the Ansible-related one, I guess?).
PS: disclosure - I am the tool's author.