AWS VPN routing to multiple subnets

We have a VPN setup with two static routes
10.254.18.0/24
10.254.19.0/24
The problem is that, from AWS, we can only ever communicate with one of the above blocks at a time. Sometimes it is .18 and sometimes it is .19; I cannot figure out what triggers the switch.
I never have any problem communicating from either of my local subnets out to AWS at the same time.
Kinda stuck here. Any suggestions?
What have we tried? Well, the 'firewall' guys said they don't see anything being blocked. But I read another post here that described the same symptom, and the problem still ended up being the firewall.
Throughout the course of playing with this, the "good" subnet has flipped 3 times. Meaning:
Right now I can talk to .19 but not .18
10 min ago I could talk to .18 but not .19
It just keeps flipping.

We've been able to get this resolved. We changed the static routes configured in AWS from:
10.254.18.0/24
10.254.19.0/24
To use instead:
10.254.18.0/23
This will encompass all the addresses we need and has resolved the issue. Here was Amazon's response:
Hello,
Thank you for contacting AWS Support. I understand you are having issues reaching your two subnets, 10.254.18.0/24 and 10.254.19.0/24, at the same time from AWS.
I am pretty sure I know why this is happening. On the AWS side, we can accept only one SA (security association) pair. On your firewall, the "firewall" guys must have configured a policy-based VPN. In a policy/ACL-based VPN, if you create policies such as: 1) source 10.254.18.0/24 and destination "VPC CIDR", 2) source 10.254.19.0/24 and destination "VPC CIDR"; OR 1) source "10.254.18.0/24, 10.254.19.0/24" and destination "VPC CIDR"; then in both cases you will form two SA pairs, because two different sources are mentioned in the policy/ACL. You just have to use a single source such as "ANY", "10.254.0.0/16", or "10.254.18.0/23". We would prefer that you use "ANY" as the source and then micro-manage the traffic using VPN filters if you are using a Cisco ASA device; how to use VPN filters is given in the configuration file for the Cisco ASA. If you are using some other device, you will have to find an equivalent solution. If your device supports route-based VPN, I would advise you to configure a route-based VPN, since route-based VPNs always create only one SA pair.
Once you find a way to create only one ACL/policy on your firewall, you will be able to reach both networks at the same time. I can see multiple SAs forming on your VPN, and this is the reason you cannot reach both subnets at the same time.
If you have any additional questions feel free to update the case and
we will respond to them.
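For what it's worth, the reason the single /23 works is that 10.254.18.0/23 is exactly the supernet of the two original /24s, so one route (and therefore one SA pair) covers both ranges. A quick sanity check with Python's ipaddress module, purely as an illustration:

```python
# Quick check that 10.254.18.0/23 really covers both original /24 subnets.
import ipaddress

supernet = ipaddress.ip_network("10.254.18.0/23")
subnets = [ipaddress.ip_network("10.254.18.0/24"), ipaddress.ip_network("10.254.19.0/24")]

for net in subnets:
    print(net, "covered by", supernet, "->", net.subnet_of(supernet))
# Both lines print True; the /23 spans 10.254.18.0 - 10.254.19.255.
```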

Related

Let AWS EKS node to access RDS

I would like to let my AWS EKS nodes communicate with AWS RDS. Both of them are in the same account and region, so there's no need to implement any sci-fi architecture; a simple one would be enough.
I started to investigate and found a couple of Stack Overflow threads.
This is the first idea, where "Security Groups for Pods" is implemented. This is not my case; I'm happy to share the RDS with all of the nodes. Am I wrong?
This is the second idea (actually in the same thread), where they suggest putting all the different resources (RDS and EKS) in the same VPC (shared?). Is it a good idea?
And finally, here the VPC peering connection is suggested as a good solution. Is it really? I can see here the announcement which states that "all data transfer over a VPC Peering connection that stays within an Availability Zone (AZ) is now free". This is good, but it looks like an enterprise solution for a simple problem.
Can you help me choose a good solution that properly fits my scenario? Can I set up proper IAM roles instead?
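For what it's worth, when the EKS nodes and the RDS instance are in the same VPC, the usual minimal setup is a security-group rule allowing the node group's security group into the database's security group; IAM roles by themselves don't open a network path. A hedged boto3 sketch with placeholder group IDs, assuming PostgreSQL on port 5432:

```python
# Sketch: allow EKS worker nodes (by their security group) to reach RDS on port 5432.
# The group IDs below are placeholders; adjust the port for your database engine.
import boto3

ec2 = boto3.client("ec2")

RDS_SG = "sg-0aaaaaaaaaaaaaaaa"    # security group attached to the RDS instance (placeholder)
NODE_SG = "sg-0bbbbbbbbbbbbbbbb"   # security group attached to the EKS worker nodes (placeholder)

ec2.authorize_security_group_ingress(
    GroupId=RDS_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,          # assuming PostgreSQL; use 3306 for MySQL
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": NODE_SG, "Description": "EKS nodes to RDS"}],
    }],
)
```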

What's the correct way to execute multiple Google Cloud Functions with different outgoing IPs?

I intend to use Google Cloud Functions to access an API. My goal is to use several functions, each with a different IP address. I would be distributing processing across several functions simultaneously, each of which would then interact with the target API. As I understand it, there's a possibility that the execution of two separate functions could take place on the same machine, meaning the requests would originate from the same IP. In order to respect the rate limits, I need to know how many requests will be going through each IP address, and therefore need to ensure that each function executes with a separate IP.
I'm new to Google Cloud Functions, but I've made some progress. Currently, I have a function function-1. This function is using connector-1 and passing all egress traffic through my default VPC network. I followed the guide provided by Google Cloud for associating a static IP with my function. As a result, I now have router-1, which is connected to my NAT gateway nat-1. Finally, nat-1 has a static IP associated with it.
At this point, any execution of function-1 is using the static IP as expected. However, I'm still trying to understand the proper way of structuring this. I have a few questions on the matter:
Do I have to duplicate every link in the chain for each function that requires its own IP address?
Am I able to re-use some of these items? For example, perhaps all functions can use the same VPC network?
Is there a better way to structure things to meet my requirements assuming I needed 10 or 20 functions using different IPs?
The answers:
I'm not sure what you mean by "duplicate every link in the chain", but if you want to enforce a static IP address for each CF, you will have to follow the steps you shared.
Yes, you can re-use the VPC network and attach a new serverless VPC connector to it, even in the same region.
If you want to force a different static IP for each CF, then no, there is no shortcut; you need to follow these steps for each one.
As a tip, you can use gcloud compute networks vpc-access connectors create to more or less automate connector creation. It may be useful if you have to create many, because it's faster than using the Console.
If this limitation does not suit your scenario, you should consider whether this is the appropriate product for you.
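For the "many connectors" case, a hedged sketch of scripting that gcloud command from Python; the connector names, region, network, and /28 ranges are made-up examples, and each connector needs its own unused /28 range:

```python
# Sketch: create one serverless VPC connector per function by shelling out to the
# documented `gcloud compute networks vpc-access connectors create` command.
import subprocess

REGION = "us-central1"
NETWORK = "default"            # connectors can share the same VPC network
ranges = ["10.8.0.0/28", "10.8.1.0/28", "10.8.2.0/28"]  # one unused /28 per connector

for i, cidr in enumerate(ranges, start=1):
    subprocess.run(
        [
            "gcloud", "compute", "networks", "vpc-access", "connectors", "create",
            f"connector-{i}",
            "--region", REGION,
            "--network", NETWORK,
            "--range", cidr,
        ],
        check=True,
    )
```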

How to measure speed from AWS regions to specific location (not mine)?

I'm looking for a way to pick the best AWS region to host a Proof of Concept installation for a potential customer in India.
For this, I'd like to try to ping the customer's web site (I verified that it's hosted in India, I assume by the customer itself since that's part of their business) from multiple AWS regions and see which one gives best results.
I found multiple tools which would allow me to run ping from my own browser to multiple AWS locations (e.g. https://cloudharmony.com/speedtest, http://www.cloudping.info/) but none which will allow me to ping between all AWS regions and a specific third party.
Does such a tool exist, or is my only option to run up an EC2 instance in each region and try to ping from it?
You might want to check the answers to this very similar question.
Keep in mind that not all regions have all AWS services available at this time, so make sure the region you pick has all the services that you plan to use. Also, Amazon has said that an India region is in the works.
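In case it's useful, here is a hedged sketch of the "EC2 instance in each region" approach: a small probe script you could drop on a throwaway instance per candidate region. The customer hostname below is a placeholder, and it measures TCP connect time rather than ICMP ping, since ping is often blocked:

```python
# Sketch: a small latency probe to run on a throwaway EC2 instance in each candidate
# region. It measures TCP connect time to the customer's site (placeholder hostname).
import socket
import time

TARGET = "www.example-customer.in"   # placeholder for the customer's host
PORT = 443
SAMPLES = 10

times = []
for _ in range(SAMPLES):
    start = time.monotonic()
    with socket.create_connection((TARGET, PORT), timeout=5):
        pass
    times.append((time.monotonic() - start) * 1000)
    time.sleep(1)

print(f"min/avg/max connect time: {min(times):.1f} / {sum(times)/len(times):.1f} / {max(times):.1f} ms")
```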

Create basic AWS CloudFormation Template for single server

I have no experience with AWS CloudFormation Templates so I apologize for the incredibly simple question which I can't find an answer to because I think it is so basic.
I am trying to create a cloudformation template for a single server in AWS Test Drive. Here is the criteria:
Deploy AMI
Force m3.large (no other sizes available)
Will be running in a single location (no other location available)
Utilize existing security group
Get a public IP
Spit back the public DNS or public IP address
Everything I've looked up is more complex than I think I need, and I can't figure out which pieces are needed and which ones can be taken out. What is the bare minimum to deploy a single AMI with no customization (all customization is performed inside the VM during boot)? There should also be no options for other data center locations or other sizes. All the templates I've seen have a bunch of options for multiple data centers and multiple sizes, and set up a security group.
I appreciate the links to the AWS site however I have already been there and this is one of the templates that has too much info and I don't know what I can change\exclude.
Thanks for your help.
Amazon Web Services documentation includes a single-server CloudFormation template that simply creates a Linux EC2 instance and accompanying security group. This one is based in US West 2 (Oregon), but does not appear to be region-specific and should work in any region.
https://s3-us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template
This sample can be found along with others here:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/sample-templates-services-us-west-2.html
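If it helps, here is a hedged sketch of roughly what that sample boils down to once the parameters and size choices are stripped out, written as a Python dict and deployed with boto3. The AMI ID, security group ID, stack name, and region are placeholders, and whether the instance actually gets a public IP depends on the subnet/default VPC settings:

```python
# Sketch: minimal single-instance CloudFormation template built as a dict and
# deployed with boto3. All IDs below are placeholders -- substitute your own.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single m3.large instance using an existing security group",
    "Resources": {
        "TestDriveInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",            # placeholder AMI
                "InstanceType": "m3.large",                     # fixed size, no parameter
                "SecurityGroupIds": ["sg-0123456789abcdef0"],   # existing security group
            },
        }
    },
    "Outputs": {
        "PublicDnsName": {"Value": {"Fn::GetAtt": ["TestDriveInstance", "PublicDnsName"]}},
        "PublicIp": {"Value": {"Fn::GetAtt": ["TestDriveInstance", "PublicIp"]}},
    },
}

cfn = boto3.client("cloudformation", region_name="us-west-2")
cfn.create_stack(StackName="test-drive-single-server", TemplateBody=json.dumps(template))
```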

How to manage and connect to dynamic IPs of EC2 instances?

When writing a web app with Django or such, what's the best way to connect to dynamic EC2 instances, such as a cluster of Redis or memcache instances? IP addresses change between reboots, etc. Elastic IPs are limited to 5 by default - what are some other options for auto-discovering/auto-updating which machines are available?
Late answer, but use Boto: http://boto.cloudhackers.com/en/latest/index.html
You can use security groups, tags, and other means to hit the EC2 API and pick the instances/IPs for each thing (DB Server, caching server, etc.) at load-time. We do this with great success in deployment, and are moving that way with our Django settings.py, as well.
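As a hedged illustration of that approach with the current boto3 API (the tag key and value are made-up examples):

```python
# Sketch: discover running instances by a role tag and collect their private IPs
# at application start-up. The tag key/value ("role" / "redis") are assumptions.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:role", "Values": ["redis"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

redis_hosts = [
    inst["PrivateIpAddress"]
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
]
print(redis_hosts)   # feed these into settings.py, a cache client, etc.
```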
One method that I heard mentioned recently in an AWS webinar was to store this sort of information in SimpleDB. Essentially, you would use SimpleDB as the central configuration location, and each instance that you launch would register its IP etc. with this configuration, so you would always have a complete description of all of your instances in one place. I haven't seen this in practice so I don't know what the best practices would be exactly, but the idea sounds reasonable. I suppose you could use SNS or something to signal all the other instances whenever the configuration changes, so everyone could refresh their in-memory cache of the configuration.
I don't really know the AWS administrative APIs yet, but there's probably an API call to list your EC2 instances, at which point you could use some sort of custom protocol to ping each of them and ask what it is -- part of the memcache cluster, Redis, etc.
I'm having a similar problem and haven't found a solution yet, because we also need to map load balancer addresses.
For your problem, there are two good alternatives:
If you are not using EC2 micro instances or load balancers, you should definitely use Amazon Virtual Private Cloud, because it lets you control instance IPs and routing tables (check all the limitations before using this service).
If you are only using EC2 instances, you could write a script that uses the EC2 API tools to run ec2-describe-instances and find all instances and their public/private IPs. The script could then map instance names to hostnames and update /etc/hosts. Finally, you would put the script in the crontab of every computer/instance that needs to access the EC2 instances (a sketch follows below).
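A rough sketch of that updater, assuming boto3's describe_instances in place of the old ec2-describe-instances CLI tool; the marker comment and the Name-tag convention are my own assumptions:

```python
# Sketch of a cron-able /etc/hosts updater: maps each running instance's "Name" tag
# to its private IP, replacing any lines previously written by this script.
import boto3

MARKER = "# --- ec2-auto ---"

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

entries = []
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        name = next((t["Value"] for t in inst.get("Tags", []) if t["Key"] == "Name"), None)
        if name and inst.get("PrivateIpAddress"):
            entries.append(f'{inst["PrivateIpAddress"]}\t{name}')

with open("/etc/hosts") as f:
    kept = [line.rstrip("\n") for line in f if MARKER not in line]

with open("/etc/hosts", "w") as f:
    f.write("\n".join(kept + [f"{e} {MARKER}" for e in entries]) + "\n")
```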
If you want to stay with plain EC2 instances (I'm in the same boat; I've read that you can do such things with their VPC, or use an S3 bucket, or something like that), I'm in the middle of writing stuff like this. It's all really simple up until the part where you need to contact the server from a server in your own data center. The way I'm doing it currently is to use the API to create the instance and start it; once it's ready, I contact the server and execute a PowerShell script that I have on it. The PowerShell script renames the computer and reboots it, which takes care of needing the hostname and MAC for our data center firewalls. I haven't found a way yet to rename a computer remotely.
As far as knowing the IPs goes, Elastic IPs are the way to go. They say you're only allowed 5 and have to apply for more, but we've been regularly requesting more and they keep granting them; we're up to about 15 now and they haven't complained yet.
Another option, if you don't want to do all the computer renaming and such, is to use DHCP and set your computer up so that when it boots it gets the computer name and everything else from DHCP. I'm not sure how to do this exactly, but I've come across very smart people telling me that's the way to do it during my research on Amazon.
I would definitely recommend that you get into the Amazon API. I've been working with it for less than a month and I can do all kinds of crazy things: my code can detect areas of our system that are getting stressed, spin up 10 Amazon servers all configured to act as whatever needs stress relief, and be ready to send jobs to all of them in less than 7 minutes. Brings a tear to my eye.
The documentation is very complete, the API itself is a work of art and a joy to program against, and I've very much enjoyed working with it (and no, I don't work for them).
Do it the traditional way: with DNS. This is what it was built for, so use it! When a machine boots, have it ask for the domain name(s) related to its function, and use that for your configuration. If it stops responding, re-resolve the DNS (or just do that periodically anyway).
I think Route 53 and Elastic Load Balancing can be used to do this, if you want to stick to Amazon solutions.
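As a hedged sketch of the DNS approach using Route 53: each instance could UPSERT an A record for its role when it boots, and clients then simply resolve that name. The hosted zone ID and record name below are placeholders:

```python
# Sketch: at boot, an instance registers its private IP under a role-based DNS name
# in a Route 53 private hosted zone. Zone ID and record name are placeholders.
import boto3
import urllib.request

# Instance metadata gives us our own private IP (IMDSv1-style call, for brevity).
my_ip = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/local-ipv4", timeout=2
).read().decode()

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",                 # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "redis-1.internal.example.com.",  # placeholder record name
                "Type": "A",
                "TTL": 60,                                 # short TTL so clients re-resolve quickly
                "ResourceRecords": [{"Value": my_ip}],
            },
        }]
    },
)
```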