At the moment I want to introduce an external firewall solution for Kubernetes on AWS.
I'm using kops to help build the production environment. It's a pretty good tool.
However, I'm new to AWS networking, and Kubernetes is also new to me.
What I want to do is set up a firewall for all requests coming into the services within the Kubernetes cluster.
And if someone hacks a container within the cluster, he or she should not be able to attack any other containers in it. Any ideas or suggestions?
For Kubernetes in general, restricting traffic at the network level can be done (assuming you're on 1.7) via Network Policies.
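As a minimal sketch of what that looks like, here is a default-deny policy plus one narrow allow rule; the namespace ("prod"), labels ("app: web", "app: api"), and port are placeholders, not anything from your cluster:

```sh
# Deny all ingress by default, then allow only web -> api on one port.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
    - Ingress                # all inbound traffic denied unless allowed
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api               # this policy applies to the api pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web       # only web pods may reach them
      ports:
        - protocol: TCP
          port: 8080
EOF
```

One caveat: as far as I know, NetworkPolicy objects are only enforced when your CNI plugin supports them (e.g. Calico, which kops can provision via --networking calico); with the default networking they are accepted but silently ignored.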
In addition to that, if you're concerned about malicious containers in your cluster, I'd recommend reviewing the CIS Kubernetes Benchmark to make sure you've locked down your cluster, as out of the box there appear to be some concerns with kops.
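If you want to automate that review, kube-bench (a third-party tool from Aqua Security, not part of kops) runs the CIS checks for you; a rough sketch, noting that the manifest URL reflects the project's repo layout and may change between releases:

```sh
# Run the CIS Kubernetes Benchmark checks as a one-off in-cluster job.
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# Review the PASS/FAIL/WARN findings.
kubectl logs job/kube-bench
```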
OK, I finally figured out a solution. At the beginning I tried to use Fortinet's FortiGate with kops, but it didn't work and caused a lot of issues...it seems that changing the route tables conflicts with kops. In any case, rewiring subnets to firewall instances is not a good idea with kops. Later we switched to Deep Security, and all is good. The only issue is that kops doesn't support custom launch configurations at the moment. I hope this helps anyone who wants to set up a secure environment on Kubernetes.
I'm interested in putting a vendor-provided application running on an AWS EC2 instance behind my Istio gateway. It sounds like the ideal scenario is to use a WorkloadEntry to define the endpoint, which would make it easy to flex should I ever get this into the cluster, etc.
In the documentation I've read, there is mention of using a sidecar on the VM to enable this. What I've failed to find is how to use a sidecar on a VM. There's lots of good material about sidecars in a pod, but I'm not sure what it takes to implement one on a VM, or how I would even go about doing that. Maybe the integration needed for the sidecar would be too complex to implement for a third-party app? Maybe I can do this better without a sidecar?
How do I find details on VM Sidecars and getting them integrated into the mesh?
When do you decide between implementing this as a WorkloadEntry vs simply a MESH_EXTERNAL ServiceEntry?
If you want to integrate a VM into your k8s Istio environment, you need to set up Istio on your VM:
https://istio.io/latest/docs/ops/deployment/vm-architecture/
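The short version of that doc: the VM runs the same Envoy sidecar a pod does, installed from Istio's DEB/RPM package, and it is represented in the mesh by a WorkloadGroup (from which Istio creates WorkloadEntry objects). A rough sketch of the flow, where the names ("legacy-app", "vm-apps", "my-vm") are placeholders and the exact steps and flags vary by Istio version (1.8+ shown), so check the VM install guide for yours:

```sh
# 1. In the cluster: describe the VM workload with a WorkloadGroup.
cat > workloadgroup.yaml <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: legacy-app
  namespace: vm-apps
spec:
  metadata:
    labels:
      app: legacy-app          # a normal Service can select on this label
  template:
    serviceAccount: legacy-app
EOF
kubectl apply -f workloadgroup.yaml

# 2. Generate the bootstrap files the VM sidecar needs (certs, hosts,
#    mesh config) and copy them to the VM; the guide covers where each
#    file goes on the VM filesystem.
istioctl x workload entry configure -f workloadgroup.yaml -o vm-files/
scp -r vm-files/ user@my-vm:

# 3. On the VM: install and start the sidecar package (Debian shown;
#    the version in the URL is an example).
curl -LO https://storage.googleapis.com/istio-release/releases/1.12.0/deb/istio-sidecar.deb
sudo dpkg -i istio-sidecar.deb
sudo systemctl start istio
```

As for WorkloadEntry vs. a MESH_EXTERNAL ServiceEntry: roughly, a MESH_EXTERNAL ServiceEntry only tells the mesh how to route out to an opaque external endpoint (no sidecar on the far side, so no mesh mTLS or telemetry from it), while the WorkloadGroup route above makes the VM a first-class mesh member. If you never plan to install anything on the VM, the ServiceEntry is the simpler choice.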
I'm trying to find a way to relay real production requests to a test environment.
And I've found this:
https://aws.amazon.com/ko/blogs/networking-and-content-delivery/mirror-production-traffic-to-test-environment-with-vpc-traffic-mirroring/
However, I'm using ECS, not plain EC2, and the setup consists of an ALB in front of ECS. So I'm wondering whether this Traffic Mirroring works with ECS, and if so, how.
Any help would be appreciated.
Leaving an answer for anyone who has the same question I did.
After spending several hours, I was able to set up Traffic Mirroring from ECS to ECS.
However, automating it across various ports could be pretty tricky, so I decided not to use Traffic Mirroring to relay traffic from prod to test.
I chose to use request logs instead of the Traffic Mirroring feature.
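For anyone who wants to attempt it anyway: with the awsvpc network mode each ECS task gets its own ENI, and that ENI is what you mirror. A rough sketch with the AWS CLI, where all resource IDs are placeholders:

```sh
# 1. Target: the ENI of a test-environment task (or an NLB) that will
#    receive the mirrored packets.
aws ec2 create-traffic-mirror-target \
  --network-interface-id eni-test0123456789

# 2. Filter: which traffic to copy (here: all inbound TCP 80).
aws ec2 create-traffic-mirror-filter --description "prod-to-test"
aws ec2 create-traffic-mirror-filter-rule \
  --traffic-mirror-filter-id tmf-0123456789 \
  --traffic-direction ingress --rule-number 100 --rule-action accept \
  --protocol 6 --destination-port-range FromPort=80,ToPort=80 \
  --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0

# 3. Session: attach filter and target to a production task's ENI.
aws ec2 create-traffic-mirror-session \
  --network-interface-id eni-prod0123456789 \
  --traffic-mirror-target-id tmt-0123456789 \
  --traffic-mirror-filter-id tmf-0123456789 \
  --session-number 1
```

The catch, as noted above, is that task ENIs are recreated whenever ECS replaces tasks, so sessions have to be re-created continuously; mirror sources also have to live on supported (Nitro-based) instance types.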
I am running a machine learning algorithm that needs to run in our own environment because it uses local cameras. We have deployed a non-managed Kubernetes cluster using an EC2 instance, and we connect the worker nodes to the master using a VPN. The problem is that this does not scale the way I want it to, and honestly, deploying a new node is a bit of a hassle.
I was wondering how I can attach on-premises nodes to EKS, or hear any other suggestion that would make our lives easier.
Well, having on-prem nodes connected to a master in Amazon is a wild idea. Nodes need to report to the master frequently, and failing to do so because of Internet hiccups may hurt you badly. I mean, dude, that's a really bad idea even if it runs nicely for now. You should consider installing the master locally.
But anyway, how do you connect nodes to the master? Does every node have its own VPN connection? How many masters do you have? Generally, you should set up an AWS VPN connection between your VPC and your local subnet using IPSec. In that case there is a permanent tunnel between the subnets, and adding more nodes becomes a trivial task, depending on how you deployed everything. At least that's how it seems to me.
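A rough sketch of what that Site-to-Site VPN looks like with the AWS CLI; all IDs, the office router's public IP, and the CIDR are placeholders:

```sh
# 1. Represent your on-prem router (its public IP) in AWS.
aws ec2 create-customer-gateway \
  --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000

# 2. Create a virtual private gateway and attach it to the cluster VPC.
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway \
  --vpn-gateway-id vgw-0123456789 --vpc-id vpc-0123456789

# 3. Create the IPSec connection between the two (static routing here).
aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --customer-gateway-id cgw-0123456789 \
  --vpn-gateway-id vgw-0123456789 \
  --options '{"StaticRoutesOnly":true}'
aws ec2 create-vpn-connection-route \
  --vpn-connection-id vpn-0123456789 \
  --destination-cidr-block 192.168.0.0/24   # the on-prem node subnet
```

After that you still add a route to the on-prem CIDR (pointing at the VGW) in the VPC route tables and configure the office router from the connection's downloadable configuration; new nodes then just need to sit in that subnet, with no per-node VPN.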
I'm trying DC/OS to set up a Spark/Mesos cluster.
I deployed the Mesos cluster on AWS, and everything went smoothly, except that the cluster is placed in a dedicated VPC that is almost inaccessible from anywhere.
The rest of my apps are in another VPC (the default one). How am I supposed to access the services hosted on the cluster from there?
I tried to set up VPC peering, with routes and new rules in the security groups, but I'm stuck, and I don't feel I'm heading in the right direction.
Did you set up the DC/OS cluster via the Mesosphere site? In that case I would actually recommend using the chat button on the lower left of the DC/OS UI.
Otherwise, if I understand your problem correctly, you should have a look at this tutorial in order to make applications available to the public. A general overview of the security model can be found here.
So basically there are two options:
Start your tasks on public nodes (by setting "acceptedResourceRoles": ["slave_public"]; see the sketch after this list)
Add an Edge Router making the tasks running on private slaves available to the outside.
For more details check the above link.
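For the first option, that setting goes in the Marathon app definition. A minimal sketch, where the app id, image, and resources are placeholders (the dcos CLI should also accept the JSON from a file instead of stdin):

```sh
# Pin a simple nginx app to the public agents via acceptedResourceRoles.
dcos marathon app add <<'EOF'
{
  "id": "/my-public-service",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 80, "hostPort": 80 } ]
    }
  },
  "acceptedResourceRoles": ["slave_public"]
}
EOF
```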
We're considering implementing an ELB in our production Amazon environment. It seems it will require that the production server instances be synced by a nightly script. Also, there is a Solr search engine that would need to be replicated and maintained on each paired server. There's also the issue of debugging: which server is a given request going to? If there's a crash, do you have to search both logs? If a production app isn't behaving, how do you isolate which instance it is on, or do you just deploy debugging code to both instances?
We aren't having issues with response time or server load. This seems like added complexity in exchange for a limited upside; it may be overkill. Thoughts?
You're enumerating the problems that arise when you need high availability :)
You need to consider how critical the availability of the service is, and take that into account when deciding what is the right solution and what is just over-engineering :)
Solutions to some caveats:
To avoid nightly syncs: use an EC2 instance as an NFS server and mount the share on both EC2 instances (or use Amazon EFS when it's available); see the sketch after this list.
Debugging problem: you can give the EC2 instances behind the ELB public IPs, restricted in the security groups to just the developers' PCs, and when debugging, point your /etc/hosts (or the Windows equivalent) at one particular server.
Logs: store the logs in S3 (or on the NFS server mentioned above).
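A rough sketch of the NFS and log pieces, where the hostnames, paths, bucket name, and IPs are all placeholders:

```sh
# On each web server: mount the shared docroot exported by the NFS host.
sudo apt-get install -y nfs-common
sudo mkdir -p /var/www/shared
sudo mount -t nfs nfs-server.internal:/export/www /var/www/shared
# Add to /etc/fstab so it survives reboots:
#   nfs-server.internal:/export/www  /var/www/shared  nfs  defaults  0 0

# Ship logs to S3 (per hostname) so you never hunt across instances:
#   */5 * * * *  aws s3 sync /var/log/myapp/ s3://my-log-bucket/$(hostname)/

# Debugging: pin your workstation to one instance via /etc/hosts, e.g.:
#   203.0.113.5  www.example.com
```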