I am trying to find a simple solution to the following problem. I have two microservices in AWS running on machines with static IPs (which won't change) behind a VPN, so they are visible to other AWS instances in the same security group. I also have a microservice on GCP (Kubernetes) which needs to access them (basically for very simple and very occasional HTTP POST requests). What would be the easiest way to do so? I was thinking about adding the IP addresses of my Kubernetes pool instances to the inbound rules of the AWS security group for those two microservices, but that is dangerous because of the dynamic nature of those addresses...
I found some solutions using tunnels and so on, but most of the guides were either outdated or didn't suit my needs. They require, for example, creating a new VPC, while I want to reuse the existing one. I am sure that would work, but it seems like huge overkill to me. Couldn't I e.g. somehow leverage Ingress or some simple proxy container?
Thanks!
I solved it by using two proxies.
I have more than 30 production Windows servers spread across all AWS regions. I would like to connect to all of the servers from one bastion host. Can anyone let me know which is the best choice? How can I set up one bastion host to communicate with all the servers across different regions and different VPCs? Any advice would be appreciated.
First of all, I would question what you are trying to achieve with a single-bastion design. For example, if all you want is to execute automation commands or apply patches, it would be significantly more efficient (and cheaper) to use AWS Systems Manager Run Command or AWS Systems Manager Patch Manager, respectively. With AWS Systems Manager you get a managed service that offers advanced management capabilities with security best practices built in. Additionally, with SSM almost all access permissions can be regulated via IAM permission policies.
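For illustration, running a command across a fleet of Windows instances with Run Command could look roughly like the following boto3 sketch (the region, instance IDs, and the PowerShell command are placeholders; the instances need the SSM Agent and an instance profile that permits Systems Manager):

```python
# Hedged sketch: send a PowerShell command to several Windows instances
# via AWS Systems Manager Run Command. Instance IDs are hypothetical.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # region is an assumption

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"],  # placeholders
    DocumentName="AWS-RunPowerShellScript",
    Parameters={"commands": ["Get-Service | Where-Object Status -eq 'Running'"]},
)
print(response["Command"]["CommandId"])
```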
Still, if you need to set up a bastion host for some other purpose not supported by SSM, the answer involves several steps.
Firstly, since you are dealing with multiple VPCs (across regions), one way to connect them all and access them from your bastion's VPC is to set up an inter-Region Transit Gateway. Among other things, you would need to make sure that your VPC (and subnet) CIDR ranges do not overlap; otherwise it will be impossible to arrange the routing tables properly.
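As a rough sketch of the cross-region piece with boto3 (all IDs, the account number, and the regions are placeholders; it assumes a Transit Gateway already exists in each region):

```python
# Hedged sketch: peer two existing Transit Gateways across regions.
import boto3

ec2_use1 = boto3.client("ec2", region_name="us-east-1")
ec2_euw1 = boto3.client("ec2", region_name="eu-west-1")

# Request a peering attachment from the us-east-1 TGW to the eu-west-1 TGW.
attachment = ec2_use1.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0aaaaaaaaaaaaaaaa",      # placeholder, us-east-1
    PeerTransitGatewayId="tgw-0bbbbbbbbbbbbbbbb",  # placeholder, eu-west-1
    PeerAccountId="123456789012",                  # placeholder account
    PeerRegion="eu-west-1",
)["TransitGatewayPeeringAttachment"]

# The peer side must accept the attachment (it first has to reach the
# pendingAcceptance state) before routes can be configured on either side.
ec2_euw1.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment["TransitGatewayAttachmentId"],
)
```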
Secondly, you need to make sure that access from your bastion is allowed in the inbound rules of your targets' security groups. Since you are dealing with peered VPCs, you will need to allow inbound access based on CIDR ranges.
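A minimal boto3 sketch of such an inbound rule, assuming Windows targets reachable over RDP (the group ID and CIDR are placeholders):

```python
# Hedged sketch: allow RDP from the bastion VPC's CIDR range into a
# target security group.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # target instance's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3389,  # RDP for the Windows targets
        "ToPort": 3389,
        "IpRanges": [{
            "CidrIp": "10.0.0.0/16",  # the bastion VPC's CIDR (placeholder)
            "Description": "RDP from bastion VPC",
        }],
    }],
)
```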
Finally, you need to decide how you will secure access to your Windows bastion host itself. As with almost all use cases relying on Amazon EC2 instances, I would stress keeping all instances in private subnets. From private subnets you can still reach the internet via NAT gateways (or NAT instances) while staying protected from unauthorized external access attempts. So if your bastion is in a private subnet, you can use SSM's port-forwarding capability to establish a session from your local machine: you get connectivity while the bastion remains secured in a private subnet.
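Sketched with boto3, and heavily hedged: start_session only returns the session metadata, while the actual tunnel is normally handled by the AWS CLI together with the Session Manager plugin (shown in the comment). The instance ID and ports are placeholders:

```python
# Hedged sketch: request a port-forwarding session to a bastion in a
# private subnet. In practice the tunnel itself is usually opened with:
#   aws ssm start-session --target i-0123456789abcdef0 \
#       --document-name AWS-StartPortForwardingSession \
#       --parameters portNumber=3389,localPortNumber=13389
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

session = ssm.start_session(
    Target="i-0123456789abcdef0",  # placeholder bastion instance ID
    DocumentName="AWS-StartPortForwardingSession",
    Parameters={"portNumber": ["3389"], "localPortNumber": ["13389"]},
)
print(session["SessionId"])
```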
Overall, this solution involves a lot of complexity and components that will definitely incur charges to your AWS account. So it would be wise to consider what practical problem you are actually trying to solve (not shared in the question) and then evaluate whether there is an applicable managed service, like SSM, already provided by AWS. Finally, from a security perspective, granting access to all instances from a single bastion might not be best practice: if your bastion is compromised for whatever reason, you have effectively compromised all of your instances across all regions.
Hope this gives you a slightly better understanding of your potential solution.
How can we restrict outbound traffic from an AWS VPC to the internet, for example by limiting outbound traffic to certain trusted domains (URL "whitelisting")?
I was thinking of AWS WAF, but it seems it filters traffic traveling to the web application, not from the web application.
Any thoughts or suggestions? Thanks in advance.
It seems that you're looking for a proxy solution. As far as I know, there aren't any managed proxy services offered by AWS yet, but you can use CloudFormation, Terraform, or similar to set one up yourself with open source solutions.
There is a good blog post on AWS about exactly your issue: https://aws.amazon.com/de/blogs/security/how-to-set-up-an-outbound-vpc-proxy-with-domain-whitelisting-and-content-filtering/
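To make the idea concrete, here is a minimal Python sketch of the domain-allowlisting behaviour such a proxy implements. The allowlist and port are made up, and it only handles plain-HTTP GET requests (no CONNECT/HTTPS), so treat it as an illustration; a real deployment would use Squid or similar, as in the blog post:

```python
# Minimal sketch of domain allowlisting in a forward proxy, to illustrate
# what the Squid setup in the linked blog post does. Not production code.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlsplit
import urllib.request

ALLOWED_DOMAINS = {"example.com", "amazonaws.com"}  # hypothetical allowlist

class AllowlistProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        host = urlsplit(self.path).hostname or ""
        # Allow exact matches and subdomains of allowlisted domains.
        if not any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS):
            self.send_error(403, "Domain not whitelisted")
            return
        try:
            with urllib.request.urlopen(self.path, timeout=10) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except Exception as exc:
            self.send_error(502, str(exc))

if __name__ == "__main__":
    # Clients would set http_proxy=http://<this-host>:3128
    HTTPServer(("0.0.0.0", 3128), AllowlistProxy).serve_forever()
```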
Maybe there is something useful for you on AWS Marketplace:
https://aws.amazon.com/marketplace/search/results?x=0&y=0&searchTerms=Proxy
The simplest and easiest way is to implement an Aviatrix FQDN egress filter. It serves exactly this purpose: from a centralized user interface you can discover and then whitelist/blacklist URLs/FQDNs in every VPC.
A proxy implementation can become complex, especially since it has to be managed separately in every VPC, and it doesn't provide centralized control.
The easiest way to get it is through an Aviatrix launch partner like SDxWORx, enabling it with discounted PAYG pricing.
https://aws.amazon.com/marketplace/pp/prodview-laruhupdkcpuy/
I'm looking for the best way to get access to a service running in a container in ECS cluster "A" from another container running in ECS cluster "B".
I don't want to make any ports public.
Currently I have found a way to make it work within the same VPC: by adding the security group of the instance in cluster "B" to an inbound rule of the security group of cluster "A", services from cluster "A" become reachable from containers running in "B" via private IP address.
But that requires this security rule to be added (which is not convenient) and won't work across regions. Maybe there's a better solution which covers both cases: same VPC and region, and different VPCs and regions?
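For reference, the rule I add today looks roughly like this with boto3 (group IDs and the port are placeholders):

```python
# Hedged sketch of the rule described above: allow cluster B's security
# group into cluster A's security group.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaaaaaaaaaaaaaa",  # cluster A's security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{
            "GroupId": "sg-0bbbbbbbbbbbbbbbb",  # cluster B's SG (placeholder)
            "Description": "service access from cluster B",
        }],
    }],
)
```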
The most flexible solution to your problem is to rely on some kind of service discovery. The AWS-native options would be Route 53 Service Registry or AWS Cloud Map. The latter is newer and also the one recommended in the docs. Check out these two links:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
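As a rough sketch of the wiring with boto3 (the namespace ID, names, subnet, and task definition are placeholders, and creating the namespace itself, which is an asynchronous operation, is left out):

```python
# Hedged sketch: register an ECS service in an existing Cloud Map private
# DNS namespace so other containers can reach it at, e.g., servicea.internal.
import boto3

sd = boto3.client("servicediscovery")
ecs = boto3.client("ecs")

discovery = sd.create_service(
    Name="servicea",
    NamespaceId="ns-0123456789abcdef",  # existing private DNS namespace
    DnsConfig={"DnsRecords": [{"Type": "A", "TTL": 60}]},
)

ecs.create_service(
    cluster="cluster-a",
    serviceName="servicea",
    taskDefinition="servicea:1",      # placeholder task definition
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {"subnets": ["subnet-0abc"]}},
    serviceRegistries=[{"registryArn": discovery["Service"]["Arn"]}],
)
```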
You could go for open source solutions like Consul.
All this could be overkill if you just need to link two individual containers. In that case you could create a small script, deployed as a Lambda, that queries the AWS API and retrieves the target info.
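A hedged sketch of what such a Lambda could look like, assuming awsvpc networking (where each task gets its own ENI) and hypothetical cluster/service names:

```python
# Hedged sketch: look up the private IPs of the tasks behind a service.
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    task_arns = ecs.list_tasks(cluster="cluster-a", serviceName="service-a")["taskArns"]
    if not task_arns:
        return {"ips": []}
    tasks = ecs.describe_tasks(cluster="cluster-a", tasks=task_arns)["tasks"]
    # For awsvpc tasks, the ENI details carry the private IPv4 address.
    ips = [
        detail["value"]
        for task in tasks
        for attachment in task.get("attachments", [])
        for detail in attachment.get("details", [])
        if detail["name"] == "privateIPv4Address"
    ]
    return {"ips": ips}
```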
Edit: Since you want to expose multiple ports on the same service, you could also use a load balancer and declare multiple target groups for your service. This way you can communicate between containers via the load balancer. Note that this can lead to increased costs because traffic goes through the LB.
Here is an answer that talks about this approach: https://stackoverflow.com/a/57778058/7391331
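Sketched with boto3, with names, VPC, ports, and the task definition as placeholders (listener rules on the load balancer are omitted):

```python
# Hedged sketch: two target groups on different ports attached to one ECS
# service, so other containers can reach both ports through the LB.
import boto3

elbv2 = boto3.client("elbv2")
ecs = boto3.client("ecs")

tg_api = elbv2.create_target_group(
    Name="svc-api", Protocol="HTTP", Port=8080,
    VpcId="vpc-0123456789abcdef0", TargetType="ip",
)["TargetGroups"][0]
tg_admin = elbv2.create_target_group(
    Name="svc-admin", Protocol="HTTP", Port=9090,
    VpcId="vpc-0123456789abcdef0", TargetType="ip",
)["TargetGroups"][0]

ecs.create_service(
    cluster="cluster-a",
    serviceName="service-a",
    taskDefinition="service-a:1",  # placeholder task definition
    desiredCount=1,
    networkConfiguration={"awsvpcConfiguration": {"subnets": ["subnet-0abc"]}},
    loadBalancers=[
        {"targetGroupArn": tg_api["TargetGroupArn"],
         "containerName": "app", "containerPort": 8080},
        {"targetGroupArn": tg_admin["TargetGroupArn"],
         "containerName": "app", "containerPort": 9090},
    ],
)
```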
To avoid adding custom security rules, you could simply set up VPC peering between regions, which allows instances in VPC 1 in Region A to reach instances in VPC 2 in Region B. This document describes how such connectivity may be established. The same document also provides references on how to link VPCs in the same region.
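A rough boto3 sketch of cross-region peering (all IDs, CIDRs, and regions are placeholders; the mirror-image route in the peer VPC is omitted):

```python
# Hedged sketch: peer a VPC in us-east-1 with a VPC in eu-west-1 and add
# a route toward the peer.
import boto3

ec2_a = boto3.client("ec2", region_name="us-east-1")
ec2_b = boto3.client("ec2", region_name="eu-west-1")

peering = ec2_a.create_vpc_peering_connection(
    VpcId="vpc-0aaaaaaaaaaaaaaaa",      # requester VPC (placeholder)
    PeerVpcId="vpc-0bbbbbbbbbbbbbbbb",  # accepter VPC (placeholder)
    PeerRegion="eu-west-1",
)["VpcPeeringConnection"]

# The accepter side has to accept the request (it may take a moment to
# reach the pending-acceptance state).
ec2_b.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"],
)

# Route traffic destined for the peer VPC's CIDR over the peering connection.
ec2_a.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.1.0.0/16",  # peer VPC's CIDR (placeholder)
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"],
)
```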
I'm new to the AWS ECS service and would like to know about security inside an ECS service.
I'm creating an ECS task which includes two Docker containers (A and B). A Spring Boot application runs in container B and works as a gateway to the backend services. No login/security is necessary to access this app from container A, so I can invoke it like http://localhost:8080/middleware/ ... and then one of the servlets generates a SAML token and invokes the backend services, adding this token as an authorization header. All looks good and works fine. However, a couple of developers indicated this design has a flaw: "Even if the ECS service is running in a security group and only an entry-point port is open, it is possible for a hacker to install malware onto the EC2 instance on which the two containers are running, and this malware can invoke the Spring Boot app running in container B, which is a security breach."
I'm not sure whether what I heard from my co-workers is true: is the security in AWS not strong enough to communicate over localhost without security between containers? If anyone can tell me about this, it would be much appreciated!
Security and Compliance is a shared responsibility between AWS and the customer.
In general, AWS is responsible for the security of the overall infrastructure of the cloud, and the customer is responsible for the security of the application, instances and of their data.
A service like ECS is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and related management tasks.
As the customer, you would normally secure an EC2-type ECS workload by hardening the instance, using proper security groups, implementing VPC security features (e.g. NACLs and private subnets), and using least-privilege IAM users/roles, while also applying Docker security best practices to secure the containers and images.
Note: Docker itself is a complicated system, and there is no one trick you can use to maintain Docker container security. Rather you have to think broadly about ways to secure your Docker containers, and harden your container environment at multiple levels, including the instance itself. Doing this is the only way to ensure that you can have all the benefits of Docker, without leaving yourself at risk of major security issues.
Some answers to your specific questions and comments:
it is possible for a hacker to install malware onto the EC2 instance on which the two containers are running, and this malware
If hackers have penetrated your instance and installed malware, then you have a major security flaw at the instance level, not at the container level. Harden and secure your instances to ensure your perimeter is protected. This is the customer's responsibility.
Is the security in AWS not strong enough to communicate over localhost without security between containers?
AWS infrastructure is secure and compliant and maintains certified compliance with security standards like PCI and HIPAA. You don't need to worry about security at the infrastructure level for this reason; that is AWS's responsibility.
No login/security is necessary to access this app from container A, so I can invoke it like http://localhost:8080/middleware
This is certainly not ideal security, and again it is the customer's responsibility to secure such application endpoints. You should consider implementing basic authentication here; it can be implemented by virtually any web or app server. You could also implement IP whitelisting so that API calls can only be accepted from container A's network subnet.
For more information on ECS security see
Security in Amazon Elastic Container Service
For more information on AWS infrastructure security see Amazon Web Services: Overview of Security Processes
Yes, your colleagues' observation is correct.
There is a very good possibility of such hacks. But AWS does provide many different ways in which you can secure your own servers and containers.
Using nested security groups in a public subnet
In this scenario, AWS lets you grant port access to a particular security group rather than to an IP address / CIDR range. Only resources that have that security group attached can access those ports, while no one from outside can.
Using a Virtual Private Cloud
In this scenario, host all your instances and ECS containers in a private subnet. They can still reach the internet for outbound traffic via a NAT gateway, but they are not directly reachable from the public internet, so your instances won't be directly vulnerable to attacks.
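A hedged boto3 sketch of this private-subnet pattern (subnet and route table IDs are placeholders):

```python
# Hedged sketch: a NAT gateway in a public subnet plus a default route
# for the private subnet's route table.
import boto3

ec2 = boto3.client("ec2")

allocation = ec2.allocate_address(Domain="vpc")  # EIP for the NAT gateway
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaaaaaaaaaaaaaaa",  # a public subnet (placeholder)
    AllocationId=allocation["AllocationId"],
)["NatGateway"]

# Wait until the NAT gateway is usable before adding the route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

ec2.create_route(
    RouteTableId="rtb-0bbbbbbbbbbbbbbbb",  # private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
```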
Right now our small-ish business has 3 clients who we have assigned to 3 elastic IPs in Amazon Web Services (AWS).
If we restart an instance no one loses access because the IPs are the same after restart.
Is there a way to handle expanding to 3 more clients without having things fall apart if there's a restart?
I'm trying to request more IPs, but they say it depends on our architecture, and I'm not sure what architecture they're looking for (or why some architectures would warrant more elastic IPs than others, or if this is an unchecked suggestion box).
I realize this is a very basic question, but googling around only gets me uninformative docs from the vendor's mouth.
EDIT:
There is a lot of content on the interwebs (mostly old) about AWS supporting IPv6, but that functionality appears to be deprecated.
You can request more EIPs in the short run. Up to 5 EIPs are free, depending on your account. You should also consider using name-based URLs and assigning each of your clients to a subdomain, for example:
clientA.example.com
clientB.example.com
clientC.example.com
This way you will not need an additional IP for every client you add. Depending on your traffic, one EC2 instance can serve many clients, and as you scale, you can put multiple EC2 instances behind an AWS Elastic Load Balancer, which will scale to serve exponentially more clients.
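For example, pointing a per-client subdomain at a shared load balancer could look roughly like this with boto3 (the hosted zone ID, domain, and ELB DNS name are placeholders):

```python
# Hedged sketch: point a per-client subdomain at a shared load balancer.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # hosted zone for example.com
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "clientA.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [
                {"Value": "my-elb-1234567890.us-east-1.elb.amazonaws.com"},
            ],
        },
    }]},
)
```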
If a client wants to keep their servers separate and can pay for them, you can purchase as many EIPs as you need. You should also consider separating the database into one database instance per client, which is probably what clients desire more than separation of IPs.
For IPv6, a quick workaround would be to use a front-end ELB that supports both IPv6 and IPv4.
If you use Elastic IPs from a VPC, you get 5 per region for an AWS account. See Amazon VPC Limits.
So you can go to the console, select VPC, click on Elastic IPs, and create one. Once created, assign it to the relevant instance.
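The same steps can be scripted, e.g. with boto3 (the instance ID is a placeholder):

```python
# Hedged sketch: allocate an Elastic IP and attach it to an instance.
import boto3

ec2 = boto3.client("ec2")

allocation = ec2.allocate_address(Domain="vpc")  # new Elastic IP
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",            # placeholder instance
    AllocationId=allocation["AllocationId"],
)
print(allocation["PublicIp"])
```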
So, at least for now, you can solve the problem if you are not bothered about the region.