I have a shell script that I want to run every hour on an AWS EC2 instance. I am using Terraform to launch the EC2 instance. Is it possible to configure the hourly execution of the shell script through Terraform itself while launching the EC2 instance?
Yes, in the aws_instance resource you can use the user_data argument to execute a script at launch that registers a cron job running hourly:
resource "aws_instance" "foo" {
ami = "ami-005e54dee72cc1d00" # us-west-2
instance_type = "t2.micro"
...
user_data = <<-EOF
sudo service cron start
echo '0 * * * * date >> ~/somefile' | crontab
EOF
}
Ensure that NTP is configured on the instance and that you are using UTC for the system time.
Helpful links
AWS EC2 Documentation, User Data
POSIX crontab
Terraform AWS provider, EC2 instance user_data
Sorry, I am very new to Terraform.
I created an AWS instance with Terraform successfully.
Then I powered off the instance in the AWS web management console.
How do I power the instance back on with Terraform?
You would have to use a local-exec provisioner to run the AWS CLI's start-instances command.
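A sketch of that approach, assuming the instance resource is named aws_instance.foo and the AWS CLI is configured on the machine running Terraform (the null_resource and its trigger are one illustrative way to make the command re-runnable, not the only option):

```hcl
resource "null_resource" "start_instance" {
  # re-run the command whenever the instance id changes
  triggers = {
    instance_id = aws_instance.foo.id
  }

  provisioner "local-exec" {
    command = "aws ec2 start-instances --instance-ids ${aws_instance.foo.id}"
  }
}
```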
You can use the instance_type value of the aws_instance Terraform resource. From the provider documentation:
instance_type : (Optional) The instance type to use for the instance.
Updates to this field will trigger a stop/start of the EC2 instance.
Just as we can display predefined attributes such as aws_instance.my-instance.public_ip through output variables at the end of terraform apply, is there a way to output custom information from the new instance, such as the contents of a system file or the output of a command like echo hello! or cat /var/log/hello.log?
You can use Terraform provisioners. They are basically an interface to run commands and scripts on a remote machine (or locally, depending on the provisioner) to achieve certain tasks, which in most cases are bootstrapping matters.
resource "aws_instance" "example" {
ami = "ami-b374d5a5"
instance_type = "t2.micro"
provisioner "local-exec" {
command = "echo ${aws_instance.example.public_ip} > ip_address.txt"
}
}
You can read more about them here: https://learn.hashicorp.com/terraform/getting-started/provision
However, keep in mind that provisioners are Terraform objects and not bound to instances, so they only execute when you use Terraform to spin up or edit instances. These bootstrapping scripts won't take effect if your instance is created by an ASG during a scale-out operation or by an orchestration tool. For that purpose, using the instance's user_data is the best option.
As @SajjadHashmi said, you can use local-exec to run commands on your local host, with some limitations.
Thus, as a measure of last resort, you can use the ssh and scp commands on your local host to fetch files from the instance and execute commands there. This is not a very nice approach, but it could be considered in some scenarios.
resource "aws_instance" "web" {
# other attributes
provisioner "local-exec" {
command = <<-EOL
# give time to instance to properly boot
sleep 30
# download a /var/log/cloud-init-output.log
# file from the instance to the host's /tmp folder
scp -i private_ssh_key -oStrictHostKeyChecking=no ubuntu#${self.public_ip}:/var/log/cloud-init-output.log /tmp
# execute command ls -a on the instance and save output to
# local file /tmp/output.txt
ssh -i private_ssh_key -oStrictHostKeyChecking=no ubuntu#${self.public_ip} ls -a >> /tmp/output.txt
EOL
}
}
I am creating an ECS cluster (EC2 launch type) using the ecs-cli. I want to run a script that modifies the vm.max_map_count setting in /etc/sysctl.conf once the EC2 instance is created. At the moment I am doing it manually, by ssh'ing into the instance and running the script as sudo.
Is it possible to run an automation script on the EC2 instance created as part of cluster creation? Any reference/documentation would be really helpful.
Thanks
Since you've tagged your question with amazon-cloudformation I assume that you are defining your ECS container instances using CFN.
If so, you can use UserData in your AWS::EC2::Instance to execute commands when the instances are launched:
Running commands on your Linux instance at launch
You are probably already using it to specify the cluster name for the ECS agent running on your instances, so you probably already have something similar in your UserData:
echo ECS_CLUSTER=${ClusterName} >> /etc/ecs/ecs.config
echo ECS_BACKEND_HOST= >> /etc/ecs/ecs.config
You can extend the UserData with extra commands that modify /etc/sysctl.conf.
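For example, to apply the vm.max_map_count change from the question (the value 262144 is illustrative; use whatever your workload needs), the extra UserData lines could look like this:

```shell
# persist the setting across reboots, then apply it immediately
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
```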
There are some other possibilities, such as using SSM State Manager to perform actions when your instances launch.
I would like to start a task definition on an instance within my cluster (not in the default one). So something like:
create a cluster
create a task definition with a docker image (I have a docker image already pushed to ecs)
run the task definition in the cluster
I would like to add a keypair to the ec2 instance for ssh access
I have tried to use these functions from boto3 (ec2, ecs):
create_cluster
run_task
register_container_instance
register_task_definition
run_instances
I managed to run an instance with run_instances, it works perfectly well but I want to run an instance in my cluster. Here is my code:
def run_instances():
    response = ec2.run_instances(
        BlockDeviceMappings=[
            {
                'DeviceName': '/dev/xvda',
                'Ebs': {
                    'DeleteOnTermination': True,
                    'VolumeSize': 8,
                    'VolumeType': 'gp2'
                },
            },
        ],
        ImageId='ami-06df494fbd695b854',
        InstanceType='m3.medium',
        MaxCount=1,
        MinCount=1,
        Monitoring={
            'Enabled': False
        })
    return response
There is a running instance on ec2 console but it doesn't appear in any of the clusters in the ecs console (I tried it with an ecs-optimized ami and with a regular one).
I also tried to follow these steps for getting my system up and running in a cluster without success:
https://github.com/spulec/moto/blob/master/tests/test_ecs/test_ecs_boto3.py
Could you please help me find out what I am missing? Is there any other setup I have to do besides calling these SDK functions?
Thank you!
You will need to run an instance that uses an ECS-optimized AMI, since those AMIs have the ECS agent preinstalled; otherwise you would need to install the ECS agent yourself and bake a custom AMI.
By default, your ECS-optimized instance launches into your default cluster, but you can specify an alternative cluster name in the UserData property of the run_instances function:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
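Tying this back to the run_instances call from the question, the UserData can be passed as a plain string (boto3 base64-encodes it for you). A minimal sketch; the helper name, cluster name, and reuse of the question's AMI id are illustrative:

```python
def build_run_instances_kwargs(cluster_name):
    """Build arguments for ec2.run_instances() so the instance joins
    the given ECS cluster instead of the default one."""
    # cloud-init runs this script as root on first boot; the ECS agent
    # reads /etc/ecs/ecs.config to learn which cluster to register with
    user_data = (
        "#!/bin/bash\n"
        "echo ECS_CLUSTER=%s >> /etc/ecs/ecs.config\n" % cluster_name
    )
    return {
        'ImageId': 'ami-06df494fbd695b854',  # must be an ECS-optimized AMI
        'InstanceType': 'm3.medium',
        'MinCount': 1,
        'MaxCount': 1,
        'UserData': user_data,  # boto3 base64-encodes this automatically
    }

# usage (requires boto3 and AWS credentials):
# import boto3
# ec2 = boto3.client('ec2')
# response = ec2.run_instances(**build_run_instances_kwargs('my-cluster'))
```

Note that the instance also needs an IAM instance profile that allows the ECS agent to register with the cluster; without it the instance will still launch but will not appear in the ECS console.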
The list of available ECS AMIs is available here
I have an EC2 instance and I need to download a file from its D drive through my program. Currently it's a very annoying process, because I can't access the instance directly from my local machine. What I am doing now is running a script on the instance: the instance uploads the file I need to S3, and my program reads the file from S3.
Just wonder whether there is any simple way to access the drive on the instance instead of going through S3?
I have used AWS Data Pipeline and its task runner to execute scripts on a remote instance. The task runner waits for a pipeline event published to its worker group.
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-using-task-runner.html
I use it to execute shell scripts and commands on a schedule. The script to run should be uploaded to S3, and the Data Pipeline template specifies the script's path. It works great for periodic tasks, and you can do anything you want on the remote box via the script.
You cannot download the file directly from EC2, but you can get it via S3 (or maybe using the scp command) from your remote EC2 instance.
But to simplify this annoying process you can use AWS Systems Manager.
AWS Systems Manager Run Command allows you to remotely and securely run a set of commands on EC2 instances as well as on-premises servers. Below are the high-level steps to achieve this.
Attach an instance IAM role:
The EC2 instance must have an IAM role with the AmazonSSMFullAccess policy. This role enables the instance to communicate with the Systems Manager API.
Install the SSM Agent:
The EC2 instance must have the SSM Agent installed. The SSM Agent processes the run-command requests and configures the instance as per the command.
Execute the command:
Example usage via the AWS CLI:
Execute the following command to retrieve the services running on the instance. Replace Instance-ID with the EC2 instance id.
aws ssm send-command --document-name "AWS-RunShellScript" --comment "listing services" --instance-ids "Instance-ID" --parameters commands="service --status-all" --region us-west-2 --output text
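The same call can be made from code. A sketch using boto3's send_command (the helper name is illustrative; the document name AWS-RunShellScript is the built-in SSM document the CLI example above uses):

```python
def build_send_command_kwargs(instance_id, commands):
    """Build arguments for ssm.send_command(), mirroring the AWS CLI
    invocation above."""
    return {
        'DocumentName': 'AWS-RunShellScript',  # built-in document for shell commands
        'Comment': 'listing services',
        'InstanceIds': [instance_id],
        'Parameters': {'commands': commands},
    }

# usage (requires boto3, AWS credentials, and an SSM-managed instance):
# import boto3
# ssm = boto3.client('ssm', region_name='us-west-2')
# response = ssm.send_command(
#     **build_send_command_kwargs('Instance-ID', ['service --status-all']))
```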
More detailed information: https://www.justdocloud.com/2018/04/01/run-commands-remotely-ec2-instances/