We recently analyzed our AWS data transfer and NAT gateway charges and observed that we are sending 80% of our traffic to an AMAZON service in this IP address range:
{
"ip_prefix": "3.237.107.0/25",
"region": "us-east-1",
"service": "AMAZON",
"network_border_group": "us-east-1"
}
Looking at the AWS public IP address ranges at
https://ip-ranges.amazonaws.com/ip-ranges.json
this range is only labelled "AMAZON", with no further detail.
Is there any way to find out which AWS service this IP address range belongs to?
We checked S3, DynamoDB, RDS, ElastiCache and the other AWS services we use, and the range doesn't fall under any of those.
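One thing worth knowing is that the "AMAZON" marker is a superset: many prefixes also appear a second time under a more specific service key, and anything AWS does not attribute to a specific service shows up only as AMAZON. A quick way to list every service marker a given IP falls under is to scan all prefixes with the standard library's ipaddress module. This is a sketch; the sample entries below are hypothetical, and in real use you would load the downloaded ip-ranges.json instead:

```python
import ipaddress

def services_for_ip(ip, ranges):
    """Return every service marker whose prefix contains the given IP."""
    addr = ipaddress.ip_address(ip)
    return sorted({
        p["service"]
        for p in ranges["prefixes"]
        if addr in ipaddress.ip_network(p["ip_prefix"])
    })

# Hypothetical entries in the ip-ranges.json shape; in practice, load
# the real file from https://ip-ranges.amazonaws.com/ip-ranges.json
sample = {
    "prefixes": [
        {"ip_prefix": "3.237.107.0/25", "region": "us-east-1", "service": "AMAZON"},
        {"ip_prefix": "3.237.0.0/16", "region": "us-east-1", "service": "EC2"},
    ]
}

print(services_for_ip("3.237.107.10", sample))  # ['AMAZON', 'EC2']
```

If an address shows up only under AMAZON, it generally belongs to a service that AWS does not attribute individually in the file.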
My team's application runs on an on-premise server at the client's HQ and uses AWS services.
The client is asking for IP addresses so they can set outbound networking restrictions.
I found that AWS provides currently used ip ranges for their services.
For example:
S3 in eu-west-1 region has 8 ip prefixes
Amazon MQ (mq.eu-west-1...) doesn't seem to have any specific IP ranges?
We could send the IP prefixes from the JSON file, but I see a few problems:
How do we avoid the client reconfiguring their network every time the IP ranges change?
The list of IP ranges for each service is quite large
Maybe naively, I wonder if there is a way to have one IP per AWS service? For example, route all S3 traffic (by using other AWS services) through a single endpoint?
Additionally, how do I find the Amazon MQ IP ranges? In the end we're connecting with something like this: amqps://{username}:{password}@{url}.mq.eu-west-1.amazonaws.com:5671
Thanks in advance!
EDIT:
RabbitMQ broker seems to have static IP
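One way to check what IP the broker endpoint currently resolves to (so it can be compared against ip-ranges.json) is to pull the hostname out of the connection URL and resolve it. A sketch, with a hypothetical broker URL:

```python
from urllib.parse import urlparse

# Hypothetical broker URL, same shape as the amqps URL above
url = "amqps://user:secret@b-1234abcd.mq.eu-west-1.amazonaws.com:5671"
parts = urlparse(url)
print(parts.hostname)  # b-1234abcd.mq.eu-west-1.amazonaws.com
print(parts.port)      # 5671

# The next step would be resolving the hostname (needs network access):
# import socket
# ip = socket.gethostbyname(parts.hostname)
```

Keep in mind that even if the resolved IP looks static today, AWS does not document it as fixed, so a firewall rule based on it can break without warning.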
After enabling network policy logging on a VPC-native cluster, it turned out that some suspicious ICMP traffic is being blocked.
According to the log's JSON payload, ICMP traffic from the Internet is somehow reaching pods (including ones that are not exposed by any Service or Ingress). Example log below:
"src": {
"instance": "redacted_public_ip"
},
"node_name": "redacted_node_name",
"count": 1,
"disposition": "deny",
"dest": {
"workload_name": "redacted_workload_name",
"workload_kind": "ReplicaSet",
"pod_namespace": "redacted_pod_namespace",
"namespace": "redacted_namespace",
"pod_name": "redacted_pod_name"
},
"connection": {
"protocol": "icmp",
"dest_ip": "redacted_private_pod_ip",
"direction": "ingress",
"src_ip": "redacted_public_ip"
}
There are multiple entries like the one above, and the public IPs are owned by several different organisations in different countries. What might be the next step in investigating this issue?
Simply block ICMP unless you really need it. There are two basic kinds of ICMP traffic: messages used for routing and error signalling, and echo request/reply (ping) messages. You are unlikely to need either one enabled from the Internet.
The next tip is that there is nothing to investigate. The public Internet will poke and prod every public IP address non-stop. Otherwise, you will need to deploy a firewall and blocklists to block known bad actors.
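If you do want to quantify the noise before deciding on a firewall or blocklists, the policy-log entries can be summarized per source IP. A minimal sketch, assuming one JSON object per line with the fields shown in the example above (the sample IPs are documentation addresses, not real sources):

```python
import json
from collections import Counter

def top_denied_sources(log_lines, n=5):
    """Tally denied ingress connections per source IP."""
    counts = Counter()
    for line in log_lines:
        entry = json.loads(line)
        conn = entry.get("connection", {})
        if entry.get("disposition") == "deny" and conn.get("direction") == "ingress":
            counts[conn.get("src_ip")] += entry.get("count", 1)
    return counts.most_common(n)

logs = [
    '{"disposition": "deny", "count": 3, "connection": {"protocol": "icmp", "direction": "ingress", "src_ip": "198.51.100.7"}}',
    '{"disposition": "deny", "count": 1, "connection": {"protocol": "icmp", "direction": "ingress", "src_ip": "203.0.113.9"}}',
]
print(top_denied_sources(logs))  # [('198.51.100.7', 3), ('203.0.113.9', 1)]
```

If the top talkers are a handful of networks, a blocklist is practical; if they are fully scattered, that is consistent with routine Internet background scanning.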
I have an application deployed on Cloud Run. It runs behind an HTTPS Load balancer.
I want it to cache some data using the Memorystore service. I basically followed the documentation for using a Serverless VPC Access connector, but this exception keeps popping up:
Unable to create a new session key. It is likely that the cache is
unavailable.
I am guessing that my cloud run service can't access memorystore.
On Django I have:
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": f"redis://{CHANNEL_REDIS_HOST}:{CHANNEL_REDIS_PORT}/16",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"IGNORE_EXCEPTIONS": True,
},
"KEY_PREFIX": "api"
}
}
where CHANNEL_REDIS_HOST is the IP from my memorystore primary endpoint and CHANNEL_REDIS_PORT is the port.
When I run this command:
gcloud redis instances describe instance_name --region region --format "value(authorizedNetwork)"
it returns projects/my_project/global/networks/default.
Then, on the VPC network page, I clicked 'default' and then 'ADD SUBNET'. I created my subnet with the IP address range 10.0.0.0/28. Maybe the problem comes from this step, as I don't fully understand the IP communication part.
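For reference, a /28 range like the one above contains 16 addresses, which is the fixed subnet size Serverless VPC Access expects for a connector. The stdlib ipaddress module makes this easy to check:

```python
import ipaddress

# The /28 subnet created above: 16 addresses, .1 through .14 usable for hosts
net = ipaddress.ip_network("10.0.0.0/28")
print(net.num_addresses)         # 16
print(list(net.hosts())[0])      # 10.0.0.1
```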
When I run this command:
gcloud compute networks subnets describe my-subnet
purpose is PRIVATE as intended and network is https://www.googleapis.com/compute/v1/projects/my_project/global/networks/default.
So I think that my memorystore instance and my subnet are able to connect.
Then, I created a serverless VPC connector, using the same region, the default network and the subnet I just created.
Finally, on my service I set the VPC connector to the one I just created and redeployed with the 'Only route requests to private IPs through the VPC connector' option. If I choose 'Route all traffic through the VPC connector', the deployment fails, probably because I am behind a load balancer; in any case I do not want to route all traffic through the connector.
And after doing all this, I still receive the error mentioned at the beginning of the message.
Any ideas ?
Thanks
So I think my issue was using database 16. As the maximum number of databases on Memorystore is 16, the valid indexes run from 0 to 15. Changing it made it work.
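In other words, with the default of 16 Redis databases the valid indexes are 0 through 15, so a connection URL ending in /16 is out of range. A quick sanity check (sketch; the URLs are illustrative):

```python
def db_index_ok(url, num_dbs=16):
    """Redis database indexes run from 0 to num_dbs - 1."""
    db = int(url.rsplit("/", 1)[-1])
    return 0 <= db < num_dbs

print(db_index_ok("redis://10.0.0.3:6379/16"))  # False
print(db_index_ok("redis://10.0.0.3:6379/15"))  # True
```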
Do EC2 instances change the IP address for your instance each time you stop/start the instance? Is there a way to keep the IP address constant?
Yes, there is a way: Elastic IP Addressing.
AWS instances are launched with a dynamic IP address by default, which means that the IP address changes every time the server is stopped and restarted. In many cases this is not desired, so users also have the option to assign the server a static IP address (also known as an "elastic IP").
According to Amazon:
An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
And:
You can have one Elastic IP (EIP) address associated with a running instance at no charge. If you associate additional EIPs with that instance, you will be charged for each additional EIP associated with that instance per hour on a pro rata basis. Additional EIPs are only available in Amazon VPC.
To configure a static IP address:
Log in to the AWS EC2 Dashboard. If required, use the region selector in the top right corner to switch to the region where your instance was launched.
Select the instance in the dashboard.
In the left navigation bar, select the “Network & Security -> Elastic IPs” menu item.
Click the “Allocate New Address” button.
For more details on setting it up, see Allocating an Elastic IP Address
and Configure a static IP address.
There is a small hourly charge if an Elastic IP address is not associated with a running instance, or if it is associated with a stopped instance or an unattached network interface. The charge is prorated and depends on the region; details can be found on the Amazon EC2 Pricing page.
Get an EIP (Elastic IP), which is free while the instance is running but incurs a small charge while the instance is stopped.
https://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
Elastic IP has its limitations.
If you have reached the maximum number of Elastic IP addresses in a region, and all you want is a constant way to connect to an EC2 instance, I would recommend using a Route53 record instead of an IP address.
I create a Route53 record that points to the IP address of my EC2 instance. The record doesn't change when the EC2 is stopped.
The way to keep the record pointing at the EC2's current address is to run a script that updates the Route53 record every time the EC2 launches.
Here's the user data of my EC2:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
# get the public ip address
# Ref: https://stackoverflow.com/questions/38679346/get-public-ip-address-on-current-ec2-instance
export public_ip=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
cat <<EOF > input.json
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "my-domain.my-company.com",
"Type": "A",
"TTL": 300,
"ResourceRecords": [
{
"Value": "${public_ip}"
}
]
}
}
]
}
EOF
# change route53 record
/usr/bin/aws route53 change-resource-record-sets \
  --hosted-zone-id <hosted_zone_of_my-company.com> \
  --change-batch file://input.json
--//
Here I use my-domain.my-company.com as the route53 record for my EC2.
By using this method, you get a route53 record that points to your EC2 instance. And the record does not change when you stop and start the EC2. So you can always use the route53 record to connect to your EC2.
Remember to assign an IAM role that has route53 permissions to the EC2 instance so that you can run the user data without errors.
And remember that the user data I provided is intended for use with Amazon Linux 2, and the commands may not work for other Linux distributions.
We have an application that runs on different client systems and sends data to Amazon Kinesis Data Firehose. The clients have firewalls that restrict outbound traffic to whitelisted IP addresses only and do not allow domain names in their firewall rules. I am not very familiar with AWS, but I have read that Amazon's IPs keep changing, which makes it hard to whitelist the IP addresses in the clients' firewalls.
I came across the following pages, which mention that the AWS public IP address ranges are available in JSON format:
https://aws.amazon.com/blogs/aws/aws-ip-ranges-json/
https://ip-ranges.amazonaws.com/ip-ranges.json
It's a huge list with multiple entries for the same region. Can you suggest a way to extract the IP ranges our service will use so that we can whitelist them in the client's firewall? Any other alternative is also welcome.
Thanks in advance for any help and/or suggestions.
Firehose has regional endpoints that are listed on this page:
https://docs.aws.amazon.com/general/latest/gr/rande.html
Using the us-east-2 endpoint as an example...
Right now, firehose.us-east-2.amazonaws.com resolves, for me, to 52.95.17.2 which currently features in the ip-ranges.json document as:
service: AMAZON
region: us-east-2
ip_prefix: 52.95.16.0/21
If you wanted to know which ranges to whitelist on the firewall, you'd need to get all of the ranges for AMAZON in us-east-2 (currently 34, if you include IPv6 addresses). Note: that assumes all of the endpoints fall under the AMAZON service marker, and you'd be whitelisting far more than just Firehose.
Previous contact with AWS support suggests that ranges can be added without warning, so you'd need to frequently check the published ranges and update the firewall to avoid a situation where the endpoint resolved to a new IP address that wasn't whitelisted.
If you did want to go the route of frequent checks and whitelisting, then a python script like this could be used to retrieve the relevant IP ranges:
#!/usr/bin/env python
import requests

# Fetch the published AWS IP ranges and print the AMAZON prefixes for us-east-2
aws_response = requests.get('https://ip-ranges.amazonaws.com/ip-ranges.json', timeout=10)
aws_response.raise_for_status()
response_json = aws_response.json()
for prefix in response_json['prefixes']:
    if prefix['service'] == 'AMAZON' and prefix['region'] == 'us-east-2':
        print(prefix['ip_prefix'])