For example, in my Python code inside Lambda I have the following connection string:
conexao = (
"Driver={/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.1.so.2.1};"
"Server=example.xxxxxxxxxxxxx.us-east-1.rds.amazonaws.com;"
"Database=example;"
"TrustServerCertificate=yes;"
"Uid=admin;"
"Pwd=xxxxxx;"
)
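For context, a string like this is typically passed straight to pyodbc. Below is a minimal sketch, assuming the pyodbc module and the ODBC driver are packaged with the Lambda deployment and reusing the conexao string defined above (the handler name and query are just for illustration):

import pyodbc

def lambda_handler(event, context):
    # Open the connection inside the handler; with Lambda and RDS in the
    # same VPC, this TCP connection goes to the RDS instance's private IP.
    conn = pyodbc.connect(conexao, timeout=5)
    row = conn.cursor().execute("SELECT 1").fetchone()
    conn.close()
    return {"ok": row[0] == 1}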
The endpoint of my RDS database is example.xxxxxxxxxxxxx.us-east-1.rds.amazonaws.com
Both the RDS database and the Lambda function are in the same VPC and subnet.
Since they're in the same VPC and subnet, this means I am not going over the internet, right? Or maybe I am? My RDS has the option Publicly accessible = Yes (because I need to access it through SSMS).
I'd like to go over the internet only when using SSMS; when accessing through Lambda I'd like to connect privately.
Does AWS understand that, in this case, the Python code inside Lambda doesn't need to go over the internet?
"this means I am not going over the internet, right?"
Yes, that's correct.
You can easily check this yourself if you want. In your Lambda function, resolve example.xxxxxxxxxxxxx.us-east-1.rds.amazonaws.com into an IP address. What you should get is the private IP address of the RDS instance, not the public one.
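For example, a minimal check you could drop into the handler (the hostname below is the one from the question, and the expected private address depends on your subnet ranges):

import socket

def lambda_handler(event, context):
    # Resolve the RDS endpoint from inside the VPC; a private address
    # (10.x, 172.16-31.x or 192.168.x) means traffic to RDS stays on
    # the private network rather than going over the internet.
    host = "example.xxxxxxxxxxxxx.us-east-1.rds.amazonaws.com"
    ip = socket.gethostbyname(host)
    print(f"{host} resolves to {ip}")
    return {"resolved_ip": ip}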
Related
I am trying to access Timestream from EC2 instances/Lambda functions that run within a VPC (they are in the VPC so that they can also speak to an RDS instance). I have spent many hours trying to get access to Timestream via PrivateLink/a VPC interface endpoint to work, and I think I may have found an issue.

When I provision a VPC endpoint for the Timestream ingest service, the private DNS name is specific to the cell endpoint, e.g. ingest-cell2.timestream.us-east-1.amazonaws.com, NOT the general endpoint URL that boto3 uses, i.e. ingest.timestream.us-east-1.amazonaws.com. When I run an nslookup on ingest-cell2.timestream.us-east-1.amazonaws.com it properly resolves to the private IP of the VPC endpoint ENI, but if I look up the more general endpoint URL, ingest.timestream.us-east-1.amazonaws.com, it continues to resolve to public AWS IPs.

The result is that if I initialize the Timestream write client normally and perform any actions, it hangs, because it is trying to communicate with a public IP from a private subnet:
import boto3
ts = boto3.client('timestream-write')
ts.meta.endpoint_url # https://ingest.timestream.us-east-1.amazonaws.com
ts.describe_endpoints() # hangs
ts.describe_database(DatabaseName='dbName') # hangs
If I explicitly give it the cell-specific endpoint URL, the describe_endpoints() call throws an error, but seemingly normal operations work (haven't tested writes or reads yet, just describing databases):
import boto3
ts = boto3.client('timestream-write', endpoint_url='https://ingest-cell2.timestream.us-east-1.amazonaws.com')
ts.describe_endpoints() # throws UnknownOperationException
ts.describe_database(DatabaseName='dbName') # succeeds
If I provision a NAT gateway for the private subnet rather than a VPC endpoint, everything works normally as expected. Furthermore, for fun, I tried adding the VPC endpoint's private IP to the /etc/hosts file, mapped to ingest.timestream.us-east-1.amazonaws.com, to force proper resolution, and even then I get the same hanging behavior when running the above block of code.
This seems pretty broken to me. The whole point of the VPC endpoint is to enable the SDK to operate normally. Maybe I am missing something?
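For reference, the resolution difference described above can be reproduced from inside the VPC with a short check; the cell-specific hostname is the one from the question and will differ per account:

import socket

for host in (
    "ingest.timestream.us-east-1.amazonaws.com",        # general endpoint boto3 uses
    "ingest-cell2.timestream.us-east-1.amazonaws.com",  # cell-specific endpoint from the question
):
    # With the interface endpoint's private DNS enabled, only the cell-specific
    # name resolves to the endpoint ENI's private IP; the general name keeps
    # resolving to public AWS IPs.
    print(host, "->", socket.gethostbyname(host))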
I'm looking at https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html trying to work out what IP address ranges are used by AWS Lambda, but in the linked JSON file I don't see any references to the Lambda service. Does AWS Lambda just use EC2 under the hood, and are those the IP address ranges I should be looking at?
The only official answer I can find, on the official AWS forum (from 2015), is:
Unfortunately Lambda does not have a fixed set of IP addresses which it uses.
VPC support, which is in our roadmap, should allow you to control the public IP addresses in use by your function through the use of an EC2 NAT.
As far as I can tell, if you need to control/know the source IP of outgoing requests from your Lambda function, the official answer is still to put it in your VPC and use NAT.
Another idea would be to make a request from your non-VPC Lambda function and see what source IP address shows up. Then try to find it in the ip-ranges.json file and use the block of whatever service it turns out to be using currently. Just take into account that this may not work forever.
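A minimal sketch of that idea, assuming the function has internet access and using the checkip.amazonaws.com echo service (any "what is my IP" endpoint would do):

import urllib.request

def lambda_handler(event, context):
    # Ask an external echo service which source IP our request arrived from,
    # then look that address up in ip-ranges.json by hand.
    with urllib.request.urlopen("https://checkip.amazonaws.com", timeout=5) as resp:
        source_ip = resp.read().decode().strip()
    print("Outgoing requests currently appear to come from", source_ip)
    return {"source_ip": source_ip}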
The IP addresses will vary.
If you need a fixed IP on AWS for a Lambda function, you can attach an Elastic Network Interface: the Lambda function will then use this interface inside a VPC, and the interface can have a fixed IP address.
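As a rough sketch, attaching an existing function to a VPC with boto3 looks like this (function name, subnet and security group IDs are placeholders; for a fixed public egress IP you would additionally route the subnets through a NAT gateway with an Elastic IP):

import boto3

lambda_client = boto3.client("lambda")

# Attach the function to a VPC; Lambda creates ENIs in these subnets, so
# outbound traffic follows the VPC's routing (e.g. a NAT gateway's fixed IP).
lambda_client.update_function_configuration(
    FunctionName="my-function",                        # placeholder
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],     # placeholder
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
    },
)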
I hope I was able to infer your needs from the original question.
More information can be found here.
Hope that helps!
Dominik
I have 2 instances in Google Compute Engine, let's say a-instance and b-instance. I want a-instance to not be accessible publicly, but b-instance should still be able to access a-instance. For example, b-instance wants to reach a-instance like this:
curl ip-external-a-instance:7713/v1/healthcheck
Right now, a-instance can still be accessed publicly. How do I make a-instance accessible only privately?
Better to remove the external IP from instance A and access it through its private IP.
See the documentation for Securely Connecting to VM Instances, under bastion hosts (which would be instance B).
Tip: always make a snapshot before messing around with the SSHd configuration.
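A minimal sketch of removing instance A's external IP with the Compute Engine API client; the project, zone, access config and interface names are assumptions for illustration:

from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses Application Default Credentials

# Remove the external (NAT) IP from a-instance's primary network interface;
# afterwards it is only reachable via its private IP inside the VPC network.
compute.instances().deleteAccessConfig(
    project="my-project-id",       # placeholder
    zone="us-central1-a",          # placeholder
    instance="a-instance",
    accessConfig="External NAT",   # default access config name (assumption)
    networkInterface="nic0",
).execute()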
You need to create firewall rules to do something like that. If you want the internet not to be able to access your VM, but b-instance to be able to, create a firewall rule that only allows b-instance to access a-instance.
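A minimal sketch of such a rule with the Compute Engine API client, assuming the instances carry network tags b-instance and a-instance and the service listens on TCP 7713 (project, tags and port are assumptions for illustration):

from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Ingress rule: allow instances tagged "b-instance" to reach port 7713 on
# instances tagged "a-instance". Tighten or remove broader allow rules separately.
firewall_body = {
    "name": "allow-b-to-a-healthcheck",
    "network": "global/networks/default",
    "direction": "INGRESS",
    "allowed": [{"IPProtocol": "tcp", "ports": ["7713"]}],
    "sourceTags": ["b-instance"],
    "targetTags": ["a-instance"],
}
compute.firewalls().insert(project="my-project-id", body=firewall_body).execute()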
We are trying to configure a VPC which has a private subnet and a public subnet. In the private subnet there is an RDS instance which is not publicly accessible. We have tested it and it seems to work fine! The issue, though, is that when I ping the RDS endpoint from my computer it returns the private IP of the RDS instance (it doesn't return any packets, though).
We do not want to expose the private IP.
Any help would be appreciated!
I went ahead and popped open a chat with our AWS support team to pick their brain. Basically, this boils down to how they host their DNS mappings for RDS endpoints; they're created in a public hosted zone by default (not modifiable). Hence, you can resolve your RDS endpoint over the internet (because the mapping is hosted publicly), but can't actually route any data to it.
If this is an issue, to get around it you can ... jump through some hoops:
An alternative will be to create a private hosted zone with a record that points to the RDS endpoint (for example a private hosted zone "xxxx.com" that has an alias record pointing to the RDS endpoint), in which case you will reach out to your RDS instance using xxxxx.com.
However, this doesn't actually stop the original AWS-created endpoint from returning the private IP; it just allows you to configure an endpoint that doesn't.
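A rough sketch of that hoop with boto3, assuming a VPC ID, the zone name xxxx.com from the quote, and a CNAME record rather than an alias (all values below are placeholders for illustration):

import uuid
import boto3

route53 = boto3.client("route53")

# Private hosted zone that only resolves inside the given VPC.
zone = route53.create_hosted_zone(
    Name="xxxx.com",
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},  # placeholders
    CallerReference=str(uuid.uuid4()),
    HostedZoneConfig={"PrivateZone": True},
)

# Point a friendly name at the real RDS endpoint.
route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "db.xxxx.com",
                "Type": "CNAME",
                "TTL": 300,
                # Replace with your actual RDS endpoint.
                "ResourceRecords": [{"Value": "mydb.xxxxxxxxxxxxx.us-east-1.rds.amazonaws.com"}],
            },
        }]
    },
)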
For what it's worth, revealing your private IP is pretty harmless; several thousand devices likely share your exact private IP. The only way this information would be concerning for you is if an attacker was actually in your network - and at that point... they could just do a lookup on the DNS from there to get the IP.
First question: why do you want to do this? Your 10.1.2.3 or 172.31.2.3 or whatever is a non-routable address. It really doesn't matter whether people know it if they can't get into your VPC.
As for actually preventing it, you can't: Amazon makes the endpoint available via DNS (you can use nslookup to find it). You could always try filing a support ticket, but I wouldn't expect any results.
Also, FYI, the second component of the endpoint is related to your account. So in your image you redacted unimportant information but left the (potentially) important information visible.
In case it's not clear, the problem is in how Amazon resolves DNS requests, not in how the networks are connected. Here's an example of an nslookup call for one of our database instances that's running on a private subnet. This is from my PC, not connected to the VPC via VPN or any other means:
> nslookup REDACTED.REDACTED.us-east-1.rds.amazonaws.com
Server: 127.0.1.1
Address: 127.0.1.1#53
Non-authoritative answer:
Name: REDACTED.REDACTED.us-east-1.rds.amazonaws.com
Address: 10.1.56.119
I started a cluster in AWS following the guides and then went about following the guestbook example. The problem I have is accessing it externally. I set the PublicIP to the EC2 public IP and then used that IP to access it in the browser on port 8000, as specified in the guide.
Nothing showed. To make sure it was actually the service that wasn't showing anything, I then removed the service and set a host port of 8000. When I went to the EC2 instance IP I could access it correctly. So it seems there is a problem with my setup or something. The one thing I can think of is that I am inside a VPC with an internet gateway. I didn't add any of the JSON files I used, because they are almost exactly the same as the guestbook example, with a few changes to allow my EC2 PublicIP and a few changes for the VPC.
On AWS you have to use your PRIVATE IP address with Kubernetes services, since your instance is not aware of its public IP. The NAT-ing on Amazon's side is done in such a way that your service will be accessible using this configuration.
Update: please note that the possibility to set the public IP of a service explicitly was removed in the v1 API, so this issue is not relevant anymore.
Please check the following documentation page for workarounds: https://kubernetes.io/docs/user-guide/services/