I am using a YAML template in the CloudFormation service on AWS. The template creates an EC2 instance with a public IP, plus a stack output that holds the IP of the instance it just created. The instance, however, is not something we use a lot, so I would like to stop it from time to time and start it again whenever I need it. The problem is that every time I start the instance again, its public IP changes, and the IP in the stack output doesn't change with it.
I found a way to make the IP static using an Elastic IP address from the EC2 service. However, I can't seem to find a way to select that IP address when choosing properties while creating the stack. So I would like some assistance with this.
You cannot define the IP address yourself, but you can extract it after it has been generated.
In your CloudFormation template, add an Outputs section like the following:
Outputs:
  myEc2IP: # just some identifier
    Value: !GetAtt myInstance.PublicIp # assuming that "myInstance" is your resource
Then, after deploying your stack, you can use the AWS CLI to extract the value:
aws cloudformation describe-stacks --stack-name $YOUR_STACK \
--query 'Stacks[0].Outputs[?OutputKey==`myEc2IP`].OutputValue' \
--output text
You can even load this into a shell variable with something like
export MY_EC2_IP="$(aws cloudformation describe-stacks …)"
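Written out in full with the query from above (the variable name is just illustrative):
export MY_EC2_IP="$(aws cloudformation describe-stacks --stack-name $YOUR_STACK \
    --query 'Stacks[0].Outputs[?OutputKey==`myEc2IP`].OutputValue' \
    --output text)"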
Learn more about Outputs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html
See other possible output/return values for EC2 instances in CloudFormation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html (in the “Return values” section).
"I can't seem to find a way to select that IP address when choosing properties while creating the stack. So I would like some assistance with this."
You can't select the IP address that you will get. Elastic IP addresses are chosen from Amazon's pool of IP addresses.
If you want to specify a specific IP address that you own, you can use Bring your own IP addresses (BYOIP) in Amazon EC2. That will allow you to specify the IP that you own to be used by your EC2 instances.
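If you only need an address that survives stop/start (rather than one you choose yourself), a plain Elastic IP is enough. A minimal CLI sketch, with placeholder IDs:
# Allocate an Elastic IP, then associate it with the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx --instance-id i-xxxxxxxxxxxxxxxxx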
I thought this was going to be easy, but unfortunately I was wrong. I just made an AWS-hosted Grafana workspace, and I'd like to query an AWS RDS instance for some data.
I am struggling to find out how to add the hosted Grafana instance to a security group so that it is allowed to access the RDS instance.
I did check the Docs!
Has anyone done this before that could help me out?
Thanks!
I ran into a similar problem. The AWS team told me that if your database sits in a non-default VPC and is publicly accessible, then you have to whitelist the IP addresses for your Managed Grafana region in your security group.
Here is the list of IP addresses by region:
• us-east-1: 35.170.12.166, 54.88.16.229, 3.234.162.252, 54.160.119.132, 54.196.72.13, 3.213.190.135, 54.83.225.191, 3.234.173.51, 107.22.41.194
• eu-central-1: 18.185.12.232, 3.69.106.181, 52.29.127.210
• us-west-2: 44.230.70.68, 34.208.176.166, 35.82.14.62
• us-east-2: 18.116.131.87, 18.117.203.54
• eu-west-1: 52.30.158.152, 54.247.159.227, 54.170.69.237, 52.210.87.10, 54.73.6.128, 54.78.34.200, 54.216.218.40, 176.34.91.249, 34.246.52.247
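Each of those addresses can then be allowed into the RDS security group. A minimal CLI sketch, assuming a PostgreSQL instance listening on port 5432 (the security group ID is a placeholder):
# Allow one Managed Grafana egress IP into the RDS security group; repeat per address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5432 \
    --cidr 18.185.12.232/32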
You can refer to the documentation provided by AWS on how to connect to the database at:
AMG Postgresql Connection
I had to do the same thing, and in the end the only way I could find the IP address was to look through the VPC Flow Logs to see what was hitting the IP address of the RDS instance.
AWS has many IP addresses it can use for this, and unfortunately there is no way to assign a specific IP address or security group to Grafana.
So you need to set up a few things to get it to work, and there is no guarantee that the IP address of your AWS-hosted Grafana won't change on you.
If you don't have one already, set up a VPC for your AWS infrastructure. Steps 1-3 in this article will set up what you need.
Set up Flow Logs for your VPC. These capture the traffic in and out of the network interfaces, and you can filter on the IP address of your RDS instance and the PostgreSQL port. This article explains how to set it up.
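For reference, a flow log can also be created from the CLI. A sketch, where the VPC ID, log group name, and IAM role ARN are placeholders:
# Capture all traffic for the VPC into a CloudWatch Logs group
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-group-name vpc-flow-logs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role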
Once you capture the IP address, you can add it to the security group for the RDS instance.
One thing I have found is that I get regular timeouts when querying RDS PostgreSQL from AWS-hosted Grafana. It works fine, then it doesn't, then it works again. I've not found a way to increase the timeout or otherwise solve the issue yet.
I need to provide my customers with fixed URLs that don't change when the EC2 instances are stopped/started, because sometimes we need to change the size of an EC2 instance, and when we restart it the public IP has changed.
I thought of using Elastic IPs so I can keep the same public IP when the instance is rebooted, but I've seen that Amazon only gives you 5 Elastic IPs. If you ask them they say they can give you more, but I guess they're not giving you 10,000 of them.
How can I use a single public Elastic IP to give each user a different URL for our service?
It would be something like this, where 11.22.33.44 is the Elastic IP and 192.168.0.1 and 192.168.0.2 are two EC2 instances:
11.22.33.44:1000 --> 192.168.0.1:22
11.22.33.44:1001 --> 192.168.0.1:80
11.22.33.44:1002 --> 192.168.0.1:443
11.22.33.44:1003 --> 192.168.0.2:22
11.22.33.44:1004 --> 192.168.0.2:80
11.22.33.44:1005 --> 192.168.0.2:443
I need to make it work programmatically, as I'm creating EC2 instances from the SDK as needed.
Another way I thought of is using subdomains of my .com domain that point to the current public IP of each EC2 instance, but using the IP and ports as I described above sounds better.
The issue is that instances are receiving new (temporary) Public IP addresses after they are stopped and started.
A simple way to handle this is to add a script to each instance that runs during every boot. This script can update a DNS record to point it at the instance.
The script should go into the /var/lib/cloud/scripts/per-boot directory, which will cause Cloud-Init to automatically run the script each time the instance is started.
#!/bin/bash
# Set these values based on your Route 53 Record Sets
ZONE_ID=Z3NAOAOAABC1XY
RECORD_SET=my-domain.com

# Extract information about the instance from the EC2 metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
MY_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Extract the Name tag associated with the instance (the region is the AZ minus its final letter)
NAME_TAG=$(aws ec2 describe-tags --region ${AZ::-1} --filters "Name=resource-id,Values=${INSTANCE_ID}" --query 'Tags[?Key==`Name`].Value' --output text)

# Update the Route 53 Record Set based on the Name tag to the current public IP address of the instance
aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"'$NAME_TAG.$RECORD_SET'","Type":"A","TTL":300,"ResourceRecords":[{"Value":"'$MY_IP'"}]}}]}'
The script extracts the Name tag of the instance and updates the corresponding Record Set in Route 53. (Feel free to change this to use a different tag.) The instance will also require IAM permissions for ec2:DescribeTags and route53:ChangeResourceRecordSets.
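A sketch of granting those permissions as an inline policy on the instance's role (the role name and policy name below are placeholders; the role attached to the instance profile is assumed to already exist). Remember that per-boot scripts must also be marked executable (chmod +x):
# Hypothetical role/policy names; attach to the role used by the instance profile
aws iam put-role-policy \
    --role-name my-instance-role \
    --policy-name update-route53-dns \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {"Effect": "Allow", "Action": "ec2:DescribeTags", "Resource": "*"},
        {"Effect": "Allow", "Action": "route53:ChangeResourceRecordSets", "Resource": "*"}
      ]
    }'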
Update: I've turned this answer into a blog post: Amazon Route 53: How to automatically update IP addresses without using Elastic IPs
I'm writing a Flask API in PyCharm. When I run my code locally, requests using boto3 to get secrets from Secrets Manager take less than a second. However, when I put my code on an EC2 instance, it takes about 3 minutes (tried in both t2.micro and m5.large).
At first I thought it could be a Python issue, so I ran it on my EC2 instances through the AWS CLI using:
aws secretsmanager get-secret-value --secret-id secretname
It still took about 3 minutes. Why does this happen? Shouldn't this, in theory, be faster on an EC2 instance than on my local machine?
EDIT: This only happens when the EC2 instance is inside a VPC other than the default VPC.
After fighting with this same issue on our local machines for almost two months, we finally made some forward progress today.
It turns out the problem is related to IPv6.
If you're using IPv6, the Secrets Manager domain will resolve to an IPv6 address. For some reason the CLI is unable to make a secure connection over IPv6. After it times out, the CLI falls back to IPv4 and then succeeds.
To verify if you're resolving to an IPv6 address, just ping secretsmanager.us-east-1.amazonaws.com. Don't worry about the ping response, you just want to see the IP address the domain resolves to.
To fix this problem, you now have three options:
1. Figure out your networking issues. This could be something on your machine or your router. If in an AWS VPC, check your routing tables and security groups, and make sure you allow outbound IPv6 traffic (::/0).
2. Reduce the CLI connect timeout to make the IPv6 call fail faster, so the IPv4 fallback happens sooner. You may want to tune the value, but the general idea is to add something like --cli-connect-timeout 1 (see the sketch after this list).
3. Disable IPv6. You can either disable IPv6 on your machine/router altogether, or adjust your machine to prefer IPv4 for this specific address (see: https://superuser.com/questions/436574/ipv4-vs-ipv6-priority-in-windows-7).
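A sketch of option 2, applied to the same command as above:
# Fail the IPv6 attempt after 1 second so the IPv4 fallback happens quickly
aws secretsmanager get-secret-value --secret-id secretname --cli-connect-timeout 1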
Ultimately, option 1 is the real solution, but since its scope is so broad, the other options might be easier.
Hopefully this helps someone else maintain a bit of sanity when they hit this.
I had this issue when working from home through the Cisco AnyConnect VPN client. Apparently it blocks anything IPv6.
The solution for me was to disable IPv6 altogether on my laptop.
To do so on macOS:
networksetup -setv6off Wi-Fi # wireless
networksetup -setv6off Ethernet # wired
To re-enable:
networksetup -setv6automatic Wi-Fi # wireless
networksetup -setv6automatic Ethernet # wired
I ran the following commands from my own computer and from an Amazon EC2 t2.nano instance in the ap-southeast-2 region:
aws secretsmanager create-secret --name foo --secret-string 'bar' --region ap-southeast-2
aws secretsmanager get-secret-value --secret-id foo --region ap-southeast-2
aws secretsmanager delete-secret --secret-id foo --region ap-southeast-2
In both cases, each command returned within a second.
Additionally, to test your situation, I did the following (in the Sydney region):
Created a new VPC using the VPC Wizard (just a public subnet)
Launched a new Amazon EC2 instance in the new VPC, with a Role granting permission to access Secrets Manager
Upgraded the AWS CLI on the instance (the installed version didn't know about secretsmanager)
Ran the above commands
They all returned immediately.
Therefore, the problem lies somewhere in your instances or your VPC.
I connected through the hotspot on my phone instead, and it worked.
I use Ansible for spinning up EC2 instances and deploying services to them. I would like to re-associate an elastic IP which is already associated with an existing EC2 instance to a new instance with as little down time as possible. Do I get it right that this only works in two steps with the Ansible ec2_eip module?
1. Disassociate it from the old instance (state: absent)
2. Associate it with the new instance (state: present)
There is no way to do it in one step, as with the allow-reassociation option of the ec2-associate-address CLI command, right?
I just made a pull request which might help you.
This PR adds two modules, boto3 and boto3_wait, which allow you to call any Boto 3 client operation.
You could use it like so:
- name: Re-associate Elastic IP
  boto3:
    name: ec2
    region: us-east-1
    operation: associate_address
    parameters:
      AllowReassociation: yes
      InstanceId: i-xxxxxxxxxxxxxxxxx
      AllocationId: eipalloc-xxxxxxxx
If you're interested in this feature, feel free to vote up the PR. :)
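For comparison, the one-step allow-reassociation call mentioned in the question maps to the modern AWS CLI like this (the IDs are placeholders):
aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx --instance-id i-xxxxxxxxxxxxxxxxx --allow-reassociation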
I created one ELB and attached a few instances to it. When I log in to one of these instances, I would like to run a command or a Node.js script that returns the name of the ELB the instance is attached to. Is that possible? I know I can look it up in the AWS console, but I'm looking for a way to do it programmatically, ideally using the AWS Node.js SDK.
You do not run Node.js on an ELB instance. An ELB is a proxy that load-balances client requests across the app servers where you run Node.js.
You could use the AWS command line tools (http://aws.amazon.com/cli/):
aws elb describe-load-balancers
Parse the JSON output for your instance ID (which you can get using this answer: Find out the instance id from within an ec2 machine) and look for whichever ELB it's attached to.
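Putting that together, a minimal shell sketch you could run on the instance itself (it assumes the CLI is configured with credentials and a region, and that the load balancer is a Classic ELB):
# Look up this instance's ID, then list Classic ELBs whose registered instances include it
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws elb describe-load-balancers \
    --query "LoadBalancerDescriptions[?Instances[?InstanceId=='${INSTANCE_ID}']].LoadBalancerName" \
    --output text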