Currently there is a cluster of 3 RabbitMQ nodes, each in a different AWS availability zone. I have been able to cluster them and prove that it is working.
The challenge I am running into is using auto scaling and automatically connecting new nodes to the cluster. From what I understand, all of the nodes must know about each other and have the others' IPs in their /etc/hosts files.
Is there a way to just search for the cluster name and tell the nodes to connect?
I have tried setting nodes behind ELBs and pointing them to the ELBs instead of IPs; however, that did not work.
For anyone else trying this, the workaround I used was to install the aws-cli. After that, I wrote a script that updates the /etc/hosts file based on the three ELBs I had (one for each node).
The script first finds the EC2 instance behind the ELB by running this command:
aws elb describe-instance-health --load-balancer-name <NAME OF YOUR ELB>
The script then gets the IP of the EC2 instance behind the ELB by running this command:
aws ec2 describe-instances --instance-ids <THE VALUE RETURNED FROM ABOVE> --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text
The trick is that the script iterates over each ELB individually, and there is only one ELB in front of each node. This allows a node to go down, be autoscaled back up, and lets us learn the new IP without having to manually edit the file.
Thus we just loop over each one and write the output to the /etc/hosts file.
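For reference, here is a minimal sketch of such a script. The ELB names (rabbit-elb-1 through rabbit-elb-3) and the hostnames (rabbit1 through rabbit3) are placeholders for illustration; substitute your own:

#!/bin/bash
set -euo pipefail

# Map each ELB (one per node) to the hostname used in the cluster config
declare -A ELB_TO_HOST=( [rabbit-elb-1]=rabbit1 [rabbit-elb-2]=rabbit2 [rabbit-elb-3]=rabbit3 )

for ELB in "${!ELB_TO_HOST[@]}"; do
    HOST=${ELB_TO_HOST[$ELB]}
    # Find the instance currently registered behind this ELB
    INSTANCE_ID=$(aws elb describe-instance-health --load-balancer-name "$ELB" \
        --query 'InstanceStates[0].InstanceId' --output text)
    # Look up its private IP
    IP=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
        --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text)
    # Remove any stale entry for this hostname, then append the fresh one
    sed -i "/[[:space:]]$HOST\$/d" /etc/hosts
    echo "$IP $HOST" >> /etc/hosts
done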
Hope that helps someone
I am using a YAML template with the CloudFormation service on AWS. The template creates an EC2 server with a public IP, and a stack output in CloudFormation containing the IP of the EC2 server it just created. However, it is not something we use a lot, so I would like to stop the server from time to time and start it again whenever I need it. The problem is that every time I launch the EC2 server again, its public IP changes, and the IP in the stack output doesn't change with it.
I found a way to make it static, using an Elastic IP address from the EC2 service. However, I can't seem to find a way to select that IP address when choosing properties while creating the stack. So I would like some assistance on this.
You cannot define the IP address yourself, but you can extract it after it has been generated.
In your CloudFormation template, add an Outputs section like the following:
Outputs:
  myEc2IP: # just some identifier
    Value: !GetAtt myInstance.PublicIp # assuming that "myInstance" is your resource
Then, after deploying your stack, you can use the AWS CLI to extract the value:
aws cloudformation describe-stacks --stack-name $YOUR_STACK \
--query 'Stacks[0].Outputs[?OutputKey==`myEc2IP`].OutputValue' \
--output text
You can even load this into a shell variable with something like
export MY_EC2_IP="$(aws cloudformation describe-stacks …)"
Learn more about Outputs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html
See other possible output/return values for EC2 instances in CloudFormation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html (in the “Return values” section).
I can't seem to find a way to select that IP address when choosing properties in creating the stack. So I would like some assistance on this.
You can't select the IP address that you will get. Elastic IP addresses are chosen from Amazon's pool of IP addresses.
If you want to specify a specific IP address that you own, you can use Bring your own IP addresses (BYOIP) in Amazon EC2. That will allow you to specify the IP that you own to be used by your EC2 instances.
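As a hedged illustration of the standard (non-BYOIP) flow, you can allocate an Elastic IP from Amazon's pool and attach it to an instance from the CLI; the instance ID and allocation ID below are placeholders:

# Allocate a new Elastic IP from Amazon's pool (you cannot pick the address)
aws ec2 allocate-address --domain vpc
# Associate it with your instance, using the AllocationId returned above
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0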
I want to add the servers behind an AWS Auto Scaling group to the Nginx configuration file. I see that with Nginx Plus there is an agent, nginx-asg-sync, which can be used directly and will do the work.
Is there any way to do the same with the Nginx open source service? I am using Nginx open source and I am not finding a way around this issue.
Thanks
In AWS you only need to know how the CLI/API works.
You can build this agent yourself using only two CLI commands:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names {PARAMS}
where {PARAMS} is your Auto Scaling group name; from the response you get the instance IDs in the group.
The second command is:
aws ec2 describe-instances --instance-ids {PARAMS}
Then all you have to do is build the logic around this. For example, in a bash script you create an nginx upstream template, and every time a new instance is launched you compare IP addresses, swap the upstreams, and reload nginx. Or you can simply add/delete the IP with sed. A sketch follows after the link below.
Here are more examples of how you can do this:
https://serverfault.com/questions/704806/how-to-get-autoscaling-group-instances-ip-adresses
You can also add a health check before changing the upstreams.
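Putting those pieces together, here is a minimal sketch under assumed names: an Auto Scaling group called web-asg and an upstream file at /etc/nginx/conf.d/upstream.conf (both placeholders):

#!/bin/bash
set -euo pipefail

ASG_NAME=web-asg                       # placeholder: your Auto Scaling group
CONF=/etc/nginx/conf.d/upstream.conf   # placeholder: your upstream config file

# Get the instance IDs currently in the Auto Scaling group
IDS=$(aws autoscaling describe-auto-scaling-groups \
    --auto-scaling-group-names "$ASG_NAME" \
    --query 'AutoScalingGroups[0].Instances[].InstanceId' --output text)

# Resolve their private IPs
IPS=$(aws ec2 describe-instances --instance-ids $IDS \
    --query 'Reservations[].Instances[].PrivateIpAddress' --output text)

# Rebuild the upstream block from the current IP list
{
    echo "upstream backend {"
    for IP in $IPS; do echo "    server $IP:80;"; done
    echo "}"
} > "$CONF.new"

# Swap the file and reload nginx only when something actually changed
if ! cmp -s "$CONF" "$CONF.new"; then
    mv "$CONF.new" "$CONF"
    nginx -s reload
else
    rm "$CONF.new"
fi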
I need to provide my customers with fixed URLs that don't change when the EC2 instances are stopped/started, because sometimes we need to change the size of the EC2 instance, and when we restart it the public IP has changed.
I thought of using Elastic IPs so I can keep the same public IP when the instance is rebooted, but I've seen that Amazon tells you that you only get 5 Elastic IPs. If you ask them they say they can give you more, but I guess they're not giving you 10,000 of them.
How can I use a single public Elastic IP to give each user different URLs for our service?
It would be something like this, with 11.22.33.44 being the Elastic IP and 192.168.0.X two EC2 instances:
11.22.33.44:**1000** --> 192.168.0.**1**:22
11.22.33.44:**1001** --> 192.168.0.**1**:80
11.22.33.44:**1002** --> 192.168.0.**1**:443
11.22.33.44:**1003** --> 192.168.0.**2**:22
11.22.33.44:**1004** --> 192.168.0.**2**:80
11.22.33.44:**1005** --> 192.168.0.**2**:443
I need to make it work programmatically, as I'm creating EC2 instances from the SDK as needed.
Another way I thought of is using subdomains of my .com domain that point to the current public IP of each EC2 instance, but using the IP as I described above sounds better.
The issue is that instances are receiving new (temporary) Public IP addresses after they are stopped and started.
A simple way to handle this is to add a script to each instance that runs during every boot. This script can update a DNS record to point it at the instance.
The script should go into the /var/lib/cloud/scripts/per-boot directory, which will cause Cloud-Init to automatically run the script each time the instance is started.
#!/bin/bash
# Set these values based on your Route 53 Record Sets
ZONE_ID=Z3NAOAOAABC1XY
RECORD_SET=my-domain.com
# Extract information about the Instance
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id/)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone/)
MY_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4/)
# Extract Name tag associated with instance
NAME_TAG=$(aws ec2 describe-tags --region ${AZ::-1} --filters "Name=resource-id,Values=${INSTANCE_ID}" --query 'Tags[?Key==`Name`].Value' --output text)
# Update Route 53 Record Set based on the Name tag to the current Public IP address of the Instance
aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"'$NAME_TAG.$RECORD_SET'","Type":"A","TTL":300,"ResourceRecords":[{"Value":"'$MY_IP'"}]}}]}'
The script will extract the Name tag of the instance and update the corresponding Record Set in Route 53. (Feel free to change this to use a different tag.) The instance will also require IAM permissions for ec2:DescribeTags and route53:ChangeResourceRecordSets.
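For completeness, a sketch of granting those permissions via an inline policy on the instance's IAM role; the role and policy names are placeholders. Also remember that Cloud-Init only runs per-boot scripts that are marked executable (chmod +x):

# Role and policy names below are placeholders; attach to the instance's role
aws iam put-role-policy --role-name my-instance-role --policy-name route53-self-update \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeTags", "route53:ChangeResourceRecordSets"],
        "Resource": "*"
      }]
    }'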
Update: I've turned this answer into a blog post: Amazon Route 53: How to automatically update IP addresses without using Elastic IPs
I'm trying to deploy Cassandra on AWS ECS as Docker containers.
Single nodes are easily done, but now I'm trying to make a cluster.
Cassandra needs fixed IP addresses, at least for the seed nodes, which need to be passed to all nodes in the cluster.
Cassandra cannot work with ELB addresses, because the ELB name resolves to a different IP than the Docker host itself.
So basically I need to be able to force AWS to deploy an image to a specific instance/host/IP. That way I can pass the correct configuration when running the Docker image.
Could I use the RunTask API and pass it a PlacementConstraint to limit the hosts to a single one, based on IP? Is PrivateIp an attribute of an EC2 instance in this interface?
Do you have any other good ideas how I can achieve that?
Thanks!
You can use hostnames in the seeds list. Just make sure that your seeds will use those names. Also, if a seed stops and resolves to another IP, you'll need to replace it (but that's true for any node that changes its IP).
If the seed ECS containers are added using awsvpc network mode, then each task gets its own ENI. After you launch the seed nodes, you can use the aws ecs describe-tasks API to get their IP addresses and update your cassandra.yaml accordingly.
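For example, a hedged sketch of that lookup; the cluster name and task ARN are placeholders:

# For awsvpc tasks, the private IP is reported on the task's ENI attachment
aws ecs describe-tasks --cluster my-cassandra-cluster --tasks <TASK ARN> \
    --query 'tasks[].attachments[].details[?name==`privateIPv4Address`].value' \
    --output text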
I am using Chef to create Amazon EC2 instances inside a VPC. I have allotted an Elastic IP to the new instance using the --associate-eip option in knife ec2 server create. How do I bootstrap it without a gateway machine? It gets stuck at "Waiting for sshd", as it uses the private IP of the newly created server to SSH into it, even though it has an Elastic IP allocated.
Am I missing anything? Here is the command I used.
bundle exec knife ec2 server create --subnet <subnet> \
  --security-group-ids <security_group> --associate-eip <EIP> \
  --no-host-key-verify --ssh-key <keypair> \
  --ssh-user ubuntu --run-list "<role_list>" \
  --image ami-59590830 --flavor m1.large --availability-zone us-east-1b \
  --environment staging --ebs-size 10 --ebs-no-delete-on-term \
  --template-file <bootstrap_file> --verbose
Is there any other work-around/patch to solve this issue?
Thanks in advance
I finally got around the issue by using the --server-connect-attribute option, which is normally meant to be used along with an --ssh-gateway attribute.
Add --server-connect-attribute public_ip_address to the above knife ec2 server create command, which will make knife use the public_ip_address of your server, for example:
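Shown here applied to the command from the question (all placeholders as before):

bundle exec knife ec2 server create --subnet <subnet> \
  --security-group-ids <security_group> --associate-eip <EIP> \
  --server-connect-attribute public_ip_address \
  --no-host-key-verify --ssh-key <keypair> \
  --ssh-user ubuntu --run-list "<role_list>" \
  --image ami-59590830 --flavor m1.large --availability-zone us-east-1b \
  --environment staging --ebs-size 10 --ebs-no-delete-on-term \
  --template-file <bootstrap_file> --verbose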
Note: This hack works with knife-ec2 (0.6.4). Refer to def ssh_connect_host here.
Chef will always use the private IP while registering the EC2 nodes. You can get this working by having your Chef server inside the VPC as well. Definitely not a best practice.
The other workaround is to let your Chef server be outside of the VPC, and instead of bootstrapping the instance using the knife ec2 command, follow the instructions over here.
This way you will bootstrap your node from the node itself and not from the Chef server/workstation.