Is it possible to create multiple domains in a single Amazon VPC (Virtual Private Cloud) in AWS?
The domain-name and domain-name-servers are part of a DHCP options set, and a single VPC can only have one DHCP options set associated at a time, as visible in the AWS Management Console and documented, e.g., for ec2-associate-dhcp-options:
After you associate the options with the VPC, any existing instances
and all new instances that you launch in that VPC use the options. [...]
However, if your use case allows, you could create additional VPCs instead - by default you can create 5 VPCs per region, see Amazon VPC Limits.
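To make the one-options-set-per-VPC constraint concrete, here is a minimal sketch of the parameters you would pass to boto3's EC2 API for this; the domain name and VPC ID are placeholders, and the actual calls (shown in comments) need AWS credentials:

```python
# Sketch: parameters for boto3's create_dhcp_options call.
# A VPC can be associated with only ONE options set at a time,
# so only one domain-name is in effect per VPC.
dhcp_configurations = [
    {"Key": "domain-name", "Values": ["internal.example.com"]},
    {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
]

# With credentials configured you would then run:
#   import boto3
#   ec2 = boto3.client("ec2")
#   opts = ec2.create_dhcp_options(DhcpConfigurations=dhcp_configurations)
#   ec2.associate_dhcp_options(
#       DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
#       VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
#   )

# "domain-name" appears exactly once, which is why multiple
# domains per VPC are not supported this way.
assert len([c for c in dhcp_configurations if c["Key"] == "domain-name"]) == 1
```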
Assuming you mean multiple IPs as well, the answer to this question is here.
Quoting:
You can now create and attach an additional network interface, known
as an elastic network interface (ENI), to any Amazon EC2 instance in
your VPC for a total of two network interfaces per instance. More
information here
http://aws.typepad.com/aws/2011/12/new-elastic-network-interfaces-in-the-virtual-private-cloud.html
This is a good question if, for instance, you are trying to broker various web services.
I have Lightsail instances in multiple regions.
I want to allow Instance_1 in Region_1 to be able to communicate with a custom AWS VPC in that region.
I understand that each Lightsail instance is an independent VPS (virtual private server).
Is it correct to say that when VPC peering is enabled (under account settings), all the Lightsail instances in the region get access to the default VPC of that region?
Is there any way to enable it for only one Lightsail instance?
Assuming a region has multiple VPCs (say a default VPC and an additional VPC), is there any way to enable VPC peering to a non-default AWS VPC?
No.
VPC Peering in Amazon Lightsail only permits connection to the Default VPC in a Region.
It also looks like all resources would be included in the peering relationship.
If you need better control, you would need to use Amazon EC2 instead of Amazon Lightsail.
(I suspect that these limitations are intentional, to encourage people with more requirements to use Amazon EC2. Amazon Lightsail is marketed as a 'starter' product with a lower price and therefore less functionality.)
I'm looking for the best way to get access to a service running in a container in ECS cluster "A" from another container running in ECS cluster "B".
I don't want to make any ports public.
Currently I found a way to make it work in the same VPC: by adding the security group of cluster "B"'s instance to an inbound rule of cluster "A"'s security group, services from cluster "A" become reachable from containers running in "B" by private IP address.
But that requires this security rule to be added (which is not convenient) and won't work across regions. Is there a better solution that covers both cases, i.e., the same VPC and region as well as different VPCs and regions?
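For reference, the security-group cross-reference described above can be sketched as the parameters for boto3's authorize_security_group_ingress call; the group IDs and port below are placeholders, and the actual call needs AWS credentials:

```python
# Sketch: ingress-rule parameters allowing cluster B's security
# group to reach cluster A's service port. IDs/port are placeholders.
CLUSTER_A_SG = "sg-aaaa1111"   # security group on cluster A's instances
CLUSTER_B_SG = "sg-bbbb2222"   # security group on cluster B's instances
SERVICE_PORT = 8080

ingress_params = {
    "GroupId": CLUSTER_A_SG,
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": SERVICE_PORT,
            "ToPort": SERVICE_PORT,
            # Reference the other security group instead of a CIDR,
            # so the rule keeps working as instances come and go.
            "UserIdGroupPairs": [{"GroupId": CLUSTER_B_SG}],
        }
    ],
}

# With credentials configured:
#   import boto3
#   boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
```

Note that this SG-to-SG reference is exactly the part that does not carry across regions, which is why the answers below look at service discovery and VPC peering instead.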
The most flexible solution to your problem is to rely on some kind of service discovery. The AWS-native options would be the Route 53 service registry or AWS Cloud Map. The latter is newer and also the one recommended in the docs. Check out these two links:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
You could go for open source solutions like Consul.
All this could be overkill if you just need to link two individual containers. In this case you could create a small script that could be deployed as a Lambda that queries the AWS API and retrieves the target info.
Edit: Since you want to expose multiple ports on the same service, you could also use a load balancer and declare multiple target groups for your service. This way you can communicate between containers via the load balancer. Note that this can increase costs because traffic goes through the load balancer.
Here is an answer that talks about this approach: https://stackoverflow.com/a/57778058/7391331
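If you go the Cloud Map route, the consuming container can look up the target service either through DNS or through the DiscoverInstances API. A minimal sketch of the API call's parameters follows; the namespace and service names are placeholders, and the actual call (in comments) needs AWS credentials:

```python
# Sketch: looking up a service registered in AWS Cloud Map from
# another container, instead of hard-coding IPs or opening ports.
discover_params = {
    "NamespaceName": "internal.example",   # placeholder Cloud Map namespace
    "ServiceName": "cluster-a-service",    # placeholder service name
}

# With credentials configured:
#   import boto3
#   sd = boto3.client("servicediscovery")
#   resp = sd.discover_instances(**discover_params)
#   targets = [
#       (i["Attributes"]["AWS_INSTANCE_IPV4"],
#        i["Attributes"].get("AWS_INSTANCE_PORT"))
#       for i in resp["Instances"]
#   ]
```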
To avoid adding custom security rules, you could simply set up VPC peering between regions, which allows instances in VPC 1 in Region A to reach instances in VPC 2 in Region B. This document describes how such connectivity can be established; the same document also covers how to link VPCs within the same region.
Is there an alternative to AWS's security groups in the Google Cloud Platform?
Following is the situation which I have:
A Basic Node.js server running in Cloud Run as a docker image.
A Postgres SQL database at GCP.
A Redis instance at GCP.
What I want to do is make a 'security group' of sorts, so that my Postgres SQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
In AWS, only services that are part of the same security group can access each other.
I'm not very sure, but I guess in GCP I need to make use of firewall rules.
If I'm correct, could someone please guide me on how to go about this? And if I'm wrong, could someone suggest the correct method?
GCP has firewall rules for its VPC that work similarly to AWS security groups. More details can be found here. You can place your PostgreSQL database, Redis instance, and Node.js server inside a GCP VPC.
Make Node.js server available to the public via DNS.
Set a default-allow-internal rule, so that only the services inside the VPC can access each other (blocking public access to the DB and Redis)
As an alternative approach, you could also keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the above solution is recommended.
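For illustration, here is the shape of such a firewall rule as a Compute Engine API request body; the rule name, project/network path, source range, ports, and tag are all placeholders for your project, and the same rule can equally be created with gcloud compute firewall-rules create:

```python
# Sketch: GCP firewall rule body allowing only in-VPC traffic to
# reach Postgres and Redis. All values are placeholders.
firewall_rule = {
    "name": "allow-internal-db-redis",
    "network": "projects/my-project/global/networks/my-vpc",
    "direction": "INGRESS",
    # Only traffic originating inside the VPC's internal range:
    "sourceRanges": ["10.128.0.0/9"],
    "allowed": [
        # Postgres (5432) and Redis (6379)
        {"IPProtocol": "tcp", "ports": ["5432", "6379"]},
    ],
    # Optionally restrict further to instances carrying a network tag:
    "targetTags": ["db"],
}
```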
Security groups in AWS are instance-attached, firewall-like components. For example, you can have an SG at the instance level, similar to configuring iptables on regular Linux.
On the other hand, Google firewall rules operate more at the network level. To match security groups' instance-level granularity, your alternatives are to use one of the following host-based firewalls:
firewalld
nftables
iptables
AWS also lets you apply rules at the subnet level, via network ACLs rather than security groups. Subnet-level controls of that kind are closer to Google firewall rules, but security groups still provide more granularity, since each instance can have its own security groups, while in GCP firewall rules are defined per network. At that level, protection should come from firewalls on the subnets.
Thanks @amsh for the solution to the problem. But a few more things were required, so I'll list them here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (Eg: us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run add the created VPC connector in the Connection section.
Create the PostgreSQL and Redis instances in the same region as the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instance and it'll be good to go.
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region of the VPC network, else they won't connect via the Private IP.
Avoid changing the firewall rules: leave the default rules in place unless you specifically need different behavior.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.
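On the first pitfall in the list above (overlapping IP ranges), a quick standard-library check can verify that the /28 range chosen for the Serverless VPC Access connector does not overlap the VPC subnet; the ranges below are examples only:

```python
import ipaddress

subnet = ipaddress.ip_network("10.8.0.0/24")      # example VPC subnet
connector = ipaddress.ip_network("10.9.0.0/28")   # proposed connector range
assert not subnet.overlaps(connector), "connector range overlaps the subnet"

# A range carved out of the subnet itself would fail the check:
bad_connector = ipaddress.ip_network("10.8.0.16/28")
assert subnet.overlaps(bad_connector)
```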
We are currently working with two environments/accounts:
Dev and Staging
We are planning to spin up a new instance to install Jenkins for CI/CD in the Dev environment.
We are also wondering whether we can use that same Dev instance for CI/CD against the Staging account as well.
How will access work?
How can the CI/CD instance access the instances in Staging for CI/CD?
Do we need to set up a cross-account role that allows the Dev CI/CD instance to access the Staging instances?
Or is the private key enough to access EC2 instances irrespective of account?
You can definitely enable this. Take a look at VPC peering.
This feature enables two VPCs, whether in different accounts or different regions, to connect to each other; their networks become connected via a tunnel between them.
When you implement this the following factors are important:
The CIDR ranges of the two VPCs must not overlap.
The VPC peering connection must be added to the route table(s) in both VPCs allowing them to know how to connect to the other VPC.
You will need to add security group rules allowing access from the instances that you want to be able to connect.
By doing this you also benefit from network traffic traversing the AWS backbone rather than the public internet, which improves security and performance.
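The two requirements above (non-overlapping CIDRs, and routes in both VPCs) can be sketched with the standard library plus boto3-style parameters; the CIDRs, route table IDs, and peering connection ID are all placeholders:

```python
# The two VPCs' CIDR ranges must not overlap for peering to work.
import ipaddress

vpc_a = ipaddress.ip_network("10.0.0.0/16")  # placeholder CIDR for VPC A
vpc_b = ipaddress.ip_network("10.1.0.0/16")  # placeholder CIDR for VPC B
assert not vpc_a.overlaps(vpc_b), "peering requires non-overlapping CIDRs"

# Each VPC's route table needs a route sending traffic for the other
# VPC's CIDR through the peering connection.
PEERING_ID = "pcx-0123456789abcdef0"   # placeholder peering connection ID
routes = [
    {"RouteTableId": "rtb-aaaa1111",           # VPC A's route table
     "DestinationCidrBlock": str(vpc_b),
     "VpcPeeringConnectionId": PEERING_ID},
    {"RouteTableId": "rtb-bbbb2222",           # VPC B's route table
     "DestinationCidrBlock": str(vpc_a),
     "VpcPeeringConnectionId": PEERING_ID},
]

# With credentials configured, each entry becomes a call like:
#   boto3.client("ec2").create_route(**route)
```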
When creating an EC2 instance (or some other kind of stuff) on AWS, there appears a default VPC.
Also, as another option, a VPC can be created beforehand and selected during the EC2 instance creation etc..
So, in which use cases should we create a new VPC instead of using the default one?
The AWS Documentation does a pretty good job describing how they create the default VPC.
When we create a default VPC, we do the following to set it up for you:
Create a VPC with a size /16 IPv4 CIDR block (172.31.0.0/16). This provides up to 65,536 private IPv4 addresses.
Create a size /20 default subnet in each Availability Zone. This provides up to 4,096 addresses per subnet, a few of which are reserved for our use.
Create an internet gateway and connect it to your default VPC.
Create a main route table for your default VPC with a rule that sends all IPv4 traffic destined for the internet to the internet gateway.
Create a default security group and associate it with your default VPC.
Create a default network access control list (ACL) and associate it with your default VPC.
Associate the default DHCP options set for your AWS account with your default VPC.
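The address counts in the quoted list can be checked with Python's standard ipaddress module (AWS reserves 5 addresses in every subnet: the first four and the last):

```python
import ipaddress

vpc = ipaddress.ip_network("172.31.0.0/16")     # the default VPC block
subnet = ipaddress.ip_network("172.31.0.0/20")  # one default subnet

print(vpc.num_addresses)         # 65536
print(subnet.num_addresses)      # 4096
print(subnet.num_addresses - 5)  # 4091 usable addresses per subnet
```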
This is great for simple applications and proofs of concept, but not for production deployments. A DB instance, for example, should not be publicly available and should therefore be placed in a private subnet, something the default VPC does not have. You would then create some sort of backend instance that connects to the DB and exposes a REST interface to the public.
Another reason may be that you are running multiple environments (DEV, QA, PROD) that are all copies of each other. In this case you would want them to be totally isolated from each other, so that a bad deployment in the DEV environment cannot accidentally affect the PROD environment.
The list can go on with other reasons, and there are probably some that are better than those that I have presented you with today.
If you understand VPC reasonably well, then you should always create your own VPC. This is because you have full control over the structure.
I suspect that the primary reason for providing a Default VPC is so that people can launch an Amazon EC2 instance without having to understand VPCs first.