How can I dynamically capture the EC2 instance name on which my Chef recipe is running?
@coderanger I am using the code below:
Ohai.plugin(:EC2) do
  provides "ec2"
  depends "ec2"

  collect_data do
    instance_id = ec2['instance_id']
    # Log it so it shows up in the ohai / chef-client debug output
    Ohai::Log.info("EC2 instance_id: #{instance_id}")
  end
end
How do I print the instance ID here?
Assuming you mean the EC2 instance ID, you can find it in node['ec2']['instance_id'] if the EC2 ohai plugin has been activated. If the instance is created via knife ec2 server create this is done automatically for you, and there is an imperfect auto-enable that tries to guess if you're on EC2. If neither of these are the case, you can force it by creating an empty file in /etc/chef/ohai/hints/ec2.json.
Related
I can't find a way to specify user data after creating an ECS instance definition.
The documentation says you can pass this user data into the Amazon EC2 launch wizard in Step 6.g of Launching an Amazon ECS Container Instance.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html#multi-part_user_data
The ECS instance is launched automatically, so how do you specify the user data?
I want to send /var/log/syslog to CloudWatch, and for that I need to add user data (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_cloudwatch_logs.html).
I had to add the user data as an Auto Scaling group launch configuration property (a boto3 sketch of the same flow follows these steps).
The steps are:
copy the existing launch configuration
edit the user data of the copied launch configuration
edit the Auto Scaling group to use the new launch configuration
terminate the ECS instances so that the modified Auto Scaling group launches new EC2 instances with the new launch configuration
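The console steps above can also be scripted. A rough boto3 sketch of the same flow, where the launch configuration names, ASG name, and user data content are all placeholders, and where you would copy across any other launch configuration settings you rely on (key pair, IAM instance profile, etc.):

import boto3

autoscaling = boto3.client("autoscaling")

OLD_LC = "ecs-lc-v1"   # placeholder: existing launch configuration
NEW_LC = "ecs-lc-v2"   # placeholder: copy with the new user data
ASG_NAME = "ecs-asg"   # placeholder: the Auto Scaling group behind the cluster

NEW_USER_DATA = """#!/bin/bash
echo ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config
"""

# 1. Copy the existing launch configuration.
old = autoscaling.describe_launch_configurations(
    LaunchConfigurationNames=[OLD_LC]
)["LaunchConfigurations"][0]

# 2. Re-create it under a new name with the edited user data.
autoscaling.create_launch_configuration(
    LaunchConfigurationName=NEW_LC,
    ImageId=old["ImageId"],
    InstanceType=old["InstanceType"],
    SecurityGroups=old["SecurityGroups"],
    UserData=NEW_USER_DATA,  # plain text; the SDK base64-encodes it for this call
)

# 3. Point the Auto Scaling group at the new launch configuration.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    LaunchConfigurationName=NEW_LC,
)

# 4. Replace the running instances so they pick up the new user data;
#    the ASG launches replacements with the new launch configuration.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
for instance in group["Instances"]:
    autoscaling.terminate_instance_in_auto_scaling_group(
        InstanceId=instance["InstanceId"],
        ShouldDecrementDesiredCapacity=False,
    )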
Via Terraform, we can pass it as a template file within the launch configuration:
data "template_file" "user_data" {
template = "${file("${path.module}/templates/user_data.sh")}"
vars = {
ecs_config = "${var.ecs_config}"
ecs_logging = "${var.ecs_logging}"
cluster_name = "${var.cluster}"
env_name = "${var.environment}"
custom_userdata = "${var.custom_userdata}"
cloudwatch_prefix = "${var.cloudwatch_prefix}"
}
By default, user data scripts and cloud-init directives run only during the first boot cycle when an EC2 instance is launched.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
The article also explains further possible workarounds.
I have an instance in AWS, and the private key file is in authorized_keys, which is stored in the .ssh path. When the Auto Scaling AMI is launched, that file should be copied to the new server in the same location, i.e. .ssh. How can we do this using a CloudFormation template?
What code or commands should I put in the CFN template?
Thanks
You can create a new AMI with your key file already stored on it and use that AMI in your CloudFormation template (a boto3 sketch of the image-creation step follows the list).
steps:
1. launch a new instance from the current AMI you are using
2. SSH to the instance and copy the private key file into place
3. stop the instance in the AWS console, then right-click the instance -> Image -> Create Image
4. use your new AMI as EcsAmiId
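If you would rather script step 3 than click through the console, the image-creation call looks roughly like this with boto3 (the instance ID and image name below are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID: the instance you prepared in steps 1-2.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="base-ami-with-authorized-keys",
    Description="Base AMI with the extra key already in ~/.ssh",
)
print(response["ImageId"])  # use this AMI ID as EcsAmiId in the CFN template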
You can specify a user data script in your CF template where you describe your EC2 instance. A user data script is just a normal bash script that is executed when the instance boots, which means you can use it to automate tasks such as copying files that you would otherwise have to do manually.
You can also pre-bake a custom AMI, or in other words, create a new AMI that has all the settings already in place and use that AMI in your CF template instead of whatever default AMI you are using right now.
A Keypair can be created within the EC2 console, or the public half of an existing Keypair can be uploaded to EC2.
Once this is done, an Amazon EC2 instance can be launched with a reference to this Keypair. Software on the instance (if using an Amazon Linux AMI) will automatically copy the public half of the nominated keypair to the /home/ec2-user/.ssh/authorized_keys file.
This applies for launching an EC2 instance via any method, eg console, API, CloudFormation.
This is much easier than trying to manipulate the authorized_keys file yourself via User Data.
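Uploading the public half of an existing keypair can also be scripted. A small boto3 sketch, where the key name and file path are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Placeholder path to the public half of your existing keypair.
with open("/home/me/.ssh/id_rsa.pub", "rb") as f:
    ec2.import_key_pair(KeyName="my-keypair", PublicKeyMaterial=f.read())

# Any instance launched with KeyName="my-keypair" (console, API, or CloudFormation)
# then gets this public key written to ~/.ssh/authorized_keys at first boot.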
I have a batch job which is triggered with boto3. I run about 10 workers in an EC2 compute environment; they used to inherit the job name from Batch, but now all my instances have no Name tag, so it's difficult to figure out which is which.
How can I pass on a name tag to the ec2 instances for my batch job via boto3?
batch = boto3.client('batch', my_arguments)
batch_details = batch.submit_job(
    jobName='worker_{}'.format(i),
    jobQueue='my_queue',
    jobDefinition='my_job_definition',
    containerOverrides={'command': ['sleep 100']},
)
I should clarify that the ultimate goal is to give each worker a unique name, not to give them all the same name as the compute environment.
The Compute Environment Parameters - AWS Batch documentation page talks of a tags parameter:
Key-value pair tags to be applied to instances that are launched in the compute environment. For example, you can specify "Name": "AWS Batch Instance - C4OnDemand" as a tag so that each instance in your compute environment has that name. This is helpful for recognizing your AWS Batch instances in the Amazon EC2 console.
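Applying that when the compute environment is created looks roughly like this with boto3; everything below (names, subnets, security groups, roles) is a placeholder and the computeResources section is trimmed to the essentials:

import boto3

batch = boto3.client("batch")

batch.create_compute_environment(
    computeEnvironmentName="my-compute-env",
    type="MANAGED",
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 16,
        "instanceTypes": ["c4.large"],
        "subnets": ["subnet-aaaa1111"],
        "securityGroupIds": ["sg-bbbb2222"],
        "instanceRole": "ecsInstanceRole",
        # Every instance the environment launches gets this Name tag.
        "tags": {"Name": "AWS Batch Instance - C4OnDemand"},
    },
)

Note that this gives every instance in the environment the same Name tag, which is what the quoted documentation describes.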
Have you tried this
Boto3 create_tags
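A minimal sketch of that call, assuming you have already looked up each worker's instance ID (the ID and name below are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID; resolve the worker's instance ID first.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "Name", "Value": "worker_1"}],
)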
I am trying to set up an ALB using Terraform and a spot instance, for a non-prod development workspace. The spot instance is created, but upon attempting to use the instance in the aws_alb_target_group_attachment, I receive an error:
* aws_alb_target_group_attachment.ui_servers: Error registering targets with target group: InvalidTarget: The following targets are not in a running state and cannot be registered: '[id]'
status code: 400, request id: [id]
This persists even if I add a depends_on directive to the attachment:
depends_on = ["data.aws_instance.workspace_gz"]
If I re-run the terraform apply, it works, so it really is just a lifecycle problem. How can I instruct the attachment to wait until the instance is healthy?
You don't. What you ought to do is create the spot instances within an Auto Scaling group and attach the ASG to the target group.
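In Terraform that attachment is usually expressed with an aws_autoscaling_attachment resource pointing the ASG at the target group. For illustration, the equivalent API call sketched with boto3 (the ASG name and target group ARN are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="spot-workspace-asg",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ui/0123456789abcdef"
    ],
)

# The ASG then registers and deregisters instances with the target group
# automatically as they launch, pass health checks, and terminate.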
The problem I am trying to solve is how to make my code, running on an EC2 instance that is part of a load-balanced AWS cluster, aware of how many other EC2 instances are within the same cluster/load balancer.
I have the following code which, when given the name of a load balancer, can tell me how many EC2 instances are associated with that load balancer.
DescribeLoadBalancersResult dlbr = loadBalancingClient.describeLoadBalancers();
List<LoadBalancerDescription> lbds = dlbr.getLoadBalancerDescriptions();
for( LoadBalancerDescription lbd : lbds )
{
    if( lbd.getDNSName().equalsIgnoreCase("MyLoadBalancer"))
    {
        System.out.println(lbd.getDNSName() + " has " + lbd.getInstances().size() + " instances") ;
    }
}
which works fine and prints out the load balancer name and the number of instances it has associated with it.
However, I want to see if I can get this info without having to provide the load balancer name. In our setup an EC2 instance will only ever be associated with one load balancer, so is there any way to go the other way, from EC2 instance to load balancer?
I figure I can go down the route of getting all load balancers from all regions and iterating through them until I find the one that contains my EC2 instance, but I figured there might be an easier way?
An interesting challenge -- I would have to wrangle with the code myself to think this through, but my gut first response would be to use the AWS CLI here, and to just invoke it from within your Java/C#.
You can make this call:
aws elb describe-load-balancers
And get all manner of information about any and all ELBs; you could simply --query filter that by the instance ID of the instance making the call in order to find out what other friends the instance has joined to its same ELB. Just call the internal instance metadata endpoint to get that ID:
http://169.254.169.254/latest/meta-data/instance-id
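Putting those two pieces together from code rather than the CLI, a rough boto3 sketch of the same idea as the Java above, just without hard-coding the load balancer name:

import boto3
import urllib.request

# Ask the instance metadata service for our own instance ID
# (assumes IMDSv1; with IMDSv2 you would fetch a session token first).
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

elb = boto3.client("elb")  # classic ELB, matching the Java example above

for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]:
    members = [i["InstanceId"] for i in lb["Instances"]]
    if instance_id in members:
        print(lb["LoadBalancerName"], "has", len(members), "instances")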
Or another fun way to go would be to bootstrap your instance AMIs so that when they are spawned and joined to an ELB, they register themselves in a SimpleDB or DynamoDB table. We do this all the time as a way of keeping current inventories of websites, or software installed, etc. So this way you would have a list, which you could then keep trimmed by checking for "running" status.
EDIT - 4/13/2015
@MayoMan I have had to make use of this as well in some current work -- to identify healthy instances attached to an ELB in an auto scaling group and then act upon them. I've found 'jq' to be a really helpful command-line tool. You could also make these calls directly to an ELB, but here it's describing an ASG:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names <ASG Name> | jq -r .AutoScalingGroups[0].Instances[0].HealthStatus
Or to list the InstanceIds themselves:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names <ASG Name> | jq -r .AutoScalingGroups[0].Instances[].InstanceId
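The same lookup from Python, for anyone not shelling out to the CLI; a small boto3 sketch where the ASG name is a placeholder:

import boto3

autoscaling = boto3.client("autoscaling")

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["my-asg"]  # placeholder ASG name
)["AutoScalingGroups"][0]

# Print only the instances the ASG currently reports as healthy.
for instance in group["Instances"]:
    if instance["HealthStatus"] == "Healthy":
        print(instance["InstanceId"])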