User-data not displaying with each Instance created - amazon-web-services

I have created an AMI from an Instance with the user data
#!/bin/bash
yum update -y
yum install httpd -y
systemctl enable httpd
systemctl start httpd
which starts the Apache server. Within it I have an index page where the Instance details are stored, so when I visit the webpage via the Instance IP address it displays this info, which it has done. I then created an AMI from that Instance and used it to create a Launch Configuration, Load Balancer and ASG, attached to my VPC and public subnets.
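A sketch of the kind of user data that produces such a page (not the exact script used here; it pulls the details from the standard EC2 instance metadata paths):
#!/bin/bash
# Illustrative only: write an index page with this Instance's details from instance metadata
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo "<h1>Served by ${INSTANCE_ID} in ${AZ}</h1>" > /var/www/html/index.html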
I am working towards a managed, load-balancing, auto-scaling web app that launches Instances based on my metric, which it does, but the problem is the 'User Data' is not displaying when new instances are launched, and it does not let me connect via the IP address.
I also have a CPU script that drives my Instance's CPU utilization to 100%, so it triggers my metric (if utilization goes above 60%, new instances are launched every minute or so). I'm also wondering how I would see the load distributed between these Instances, to make sure it is being distributed equally across the separate Availability Zones. Would there be a log?
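The CPU script is essentially a CPU-burn loop along these lines (a simplified sketch, not the exact script):
#!/bin/bash
# Illustrative only: one busy loop per vCPU to push CPU utilization to 100%
for i in $(seq "$(nproc)"); do
  yes > /dev/null &
done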

Related

AWS ALB Target Group shows unhealthy instances in a custom VPC

I am trying to achieve the following network topology. I want the EC2 instances in private subnets to receive http traffic on port 80 from application load balancer.
For that, I have launched one EC2 instance in each of the two private subnets. I have also installed the Apache web server with an index.html using the following user data script.
#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello World from $(hostname -f)" > /var/www/html/index.html
Next, I created an ALB in the public subnets and registered the EC2 instances with a Target Group while creating the ALB. But the health checks for the registered EC2 instances always fail. Please find the image below.
I have double-checked the security groups for the EC2 instances and the ALB. Both look fine to me. Could anyone please let me know what I am missing here?
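For reference, the kind of ingress rule the instance security group needs so the ALB can reach port 80 looks like this (a sketch with placeholder group IDs, not the actual IDs):
# sg-instance / sg-alb are placeholders for the instance and ALB security group IDs
aws ec2 authorize-security-group-ingress \
  --group-id sg-instance \
  --protocol tcp --port 80 \
  --source-group sg-alb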
thanks

Spark Initial Job Not Accepting Resources Amazon EC2 Standalone Cluster

So I have deployed a standalone cluster to Amazon EC2 using Terraform. It is using passwordless ssh to communicate with workers.
I start the master with the start-master script, setting the public IP of the cluster to the public DNS of the EC2 instance.
I then start the slaves using the start-slaves script, having copied over a conf/slaves file with the public IP addresses of the 2 EC2 instances that are available to run the workers. (They each have the Spark deployment in the same location as the master.)
In the UI the workers are registered and running.
However, when I submit any job to the cluster, it is never able to allocate resources, and it keeps showing the message that the initial job has not accepted any resources.
Does anyone know how to solve this?
The logs show the workers starting and registering correctly, and the task I'm submitting is within the available resources (I have tried as little as 1 CPU core and 500 MB).
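The submission looks roughly like this (a sketch: the master DNS and the job file are placeholders, with the resource flags scaled down to the minimal values mentioned above):
# Placeholders: <master-public-dns> and my_job.py
spark-submit \
  --master spark://<master-public-dns>:7077 \
  --total-executor-cores 1 \
  --executor-memory 500m \
  my_job.py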
Does anyone know why the task might not be getting accepted?
Cheers

service unable to place a task

OK, I am lost as to where to even start troubleshooting this. I am trying to spin up a stack that has a basic app running in ECS. I will show the CloudFormation below. But I keep getting:
service sos-ecs-SosEcsService-1RVB1U5QXTY9S was unable to place a task
because no container instance met all of its requirements. Reason: No
Container Instances were found in your cluster. For more information,
see the Troubleshooting section.
I get 2 EC2 instances up and running, but neither appears in the ECS cluster's instances.
Here are a few of my theories:
Is my user_data correct? Do I need to sub in the values?
What about the health check?
My app is a Sinatra app that uses port 4567. Am I missing something with that?
Also, I basically started with this, http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-ecs.html, and just streamlined it. So here is my current JSON, https://gist.github.com/kidbrax/388e2c2ae4d622b3ac4806526ec0e502
On a side note, how could I simplify this to take out all the autoscaling? I just want to get it working in some form or fashion.
In order for the ECS instance to join the cluster, the following conditions must be met:
The agent must be configured correctly to connect to the correct
cluster via the /etc/ecs/ecs.config file.
The ECS instance must be assigned the correct IAM role to allow the ECS agent to access the ECS endpoints.
The ECS instance must have a connection to the Internet to contact the control plane, either via igw or NAT.
The ECS agent on the ECS Instance should be running.
UserData should be used to configure the /etc/ecs/ecs.config file, for example:
#!/bin/bash
echo ECS_CLUSTER=ClusterName >> /etc/ecs/ecs.config
You can check the reason for a Container Instance not registering with the Cluster in /var/log/ecs/ecs-agent.log*.
After reading Why can't my ECS service register available EC2 instances with my ELB? I realized the issue was my userdata. The values were not being substituted correctly, and so the instances were joining the default cluster.
Unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.
This usually means that your instances booted, but they're not healthy enough to register with the cluster.
Navigate to the Load Balancing Target Group of your cluster, then check the following:
The health status of the instances in the Targets tab.
The attributes in the Description tab (the values could be off).
The health check parameters.
If your instances are terminated, check the system logs of the terminated instances and look for any errors in your userdata script (check in Launch Configurations).
If the instances are running, SSH to them and verify the following:
The cluster is correctly configured in /etc/ecs/ecs.config.
The ECS agent is up and running (docker ps). If it's not, start it manually with: start ecs.
Check the ECS logs for any errors with: tail -f /var/log/ecs/*.
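Put together, a quick on-instance check looks something like this (a sketch; start ecs applies to the upstart-based Amazon Linux ECS AMIs):
#!/bin/bash
# Run on a container instance that has not joined the cluster
cat /etc/ecs/ecs.config                  # does ECS_CLUSTER point at the right cluster?
docker ps | grep ecs-agent               # is the ECS agent container running?
sudo start ecs                           # start it if not (upstart on the Amazon Linux ECS AMI)
tail -n 50 /var/log/ecs/ecs-agent.log*   # look for registration errors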
Related:
terraform-ecs. Registered container instance is showing 0
How do I find the cause of an EC2 autoscaling group "health check" failure? (no load balancer involved)

How to deal with AWS EC2 instance retirement when using Elastic Beanstalk

I have received an email from AWS that states
We have important news about your account (AWS Account ID: XXXXX). EC2
has detected degradation of the underlying hardware hosting your
Amazon EC2 instance (instance-ID: i-XXXX) in the eu-west-1 region. Due
to this degradation, your instance could already be unreachable. After
2017-05-25 10:00 UTC your instance, which has an EBS volume as the
root device, will be stopped.
I'm actually using Elastic Beanstalk with a load balancer with an elastic IP address on what is currently the only instance running (manually associated). In addition I have a reverse DNS for email purposes.
The email continues to say the following...
You may still be able to access the instance. We recommend that you
replace the instance by creating an AMI of your instance and launch a
new instance from the AMI. For more information please see Amazon
Machine Images
(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) in the
EC2 User Guide. In case of difficulties stopping your EBS-backed
instance, please see the Instance FAQ
(http://aws.amazon.com/instance-help/#ebs-stuck-stopping).
So how do I get Elastic Beanstalk to re-provision to new hardware?
Some options seem to be...
rebuild environment
save configuration -> terminate -> load configuration
clone environment -> manually change DNS -> Terminate old environment
'Terminate' environment -> 'Restore terminated environment'?
I'm not sure which variant would restore the environment; in particular, it would be ideal if I don't lose the hostname / reverse DNS stuff that was done for the email (SNS?) configuration.
It would be nice if I kept all of this (I don't care about the EC2 instance or its data; the data is held in MongoDB, external to all of this) ...
EC2 configuration (i.e. hardware box size, VM parameters etc)
Security Groups
Load balancer
Elastic IP associated to EC2 (easy enough to do manually after)
Hostname (whatever is required for the reverse DNS)
Thoughts would be appreciated! It's a shame their email / documentation only discusses EC2 and not Beanstalk configurations.
Just terminate the instance and let Elastic Beanstalk automatically spin up a new one. Any changes you are making to your EC2 instances in your beanstalk environment should be done through .ebextensions configuration files (you aren't making changes directly over ssh, right?) so you don't need to worry about "saving" your EC2 setup via creating an AMI.
As for all the items you listed that you need to save, those are all part of the EB environment configuration, not part of the EC2 instance that is being retired.
A load balanced Elastic Beanstalk environment is configured to terminate and create new EC2 instances as needed. There's no need to completely rebuild/replace your entire EB environment just because you need to replace one of the EC2 instances.
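If you prefer the command line, terminating the retired instance is a one-liner (a sketch, assuming the AWS CLI is configured; i-XXXX stands for the instance ID from the notice):
# Elastic Beanstalk's Auto Scaling group will launch a replacement automatically
aws ec2 terminate-instances --instance-ids i-XXXX --region eu-west-1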

Assigning Elastic IP to Auto-Scaled EC2 in VPC - AWS

My goal is to automatically assign an elastic IP to an auto-scaled EC2 instance.
I have done the following:
- Configured EC2 instance w/ startup script to assign IP
- Configured launch config and auto-scale group per spec.
The issue is that when deploying the auto-scaled launch config I lose the ability to allow it to automatically assign a public address (at first) before it picks up the elastic IP assignment.
When I deploy the AMI manually, provided that I check that "assign public IP address" box, the instance will deploy, temporarily assign the xxxx.amazon.xxxx address, then roll over to my elastic IP assignment.
however...
When deployed through the auto-scaling command line utilities (as-create-launch-config + as-create-auto-scaling-group), the IP will not work. I feel it could be fixed if there were an option when setting up the launch config to temporarily grab a public IP in order to communicate with the Amazon API and pull the elastic IP assignment.
I greatly appreciate your help!
You may want to use cloud-init to run a command on the local autoscaled server that attaches the EIP. Here is an example of a local command running on a server on first boot: http://cloudinit.readthedocs.org/en/latest/topics/examples.html#run-commands-on-first-boot
In that local command you could use amazon's built in tools to associate the address: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-AssociateAddress.html
In the launch config, add that cloud-init syntax to the user-data attribute, base64 encoded, and all future autoscaled instances will do exactly what the cloud-init config says.
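A minimal sketch of what that first-boot user data could look like (this uses the current AWS CLI in place of the legacy API tools linked above; the allocation ID is a placeholder, and the instance role must allow ec2:AssociateAddress):
#!/bin/bash
# Placeholder Elastic IP allocation ID
ALLOC_ID="eipalloc-xxxxxxxx"
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id "$ALLOC_ID"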
I usually base64 encode by doing:
base64 <filename>