Adding an on-premise node to EKS

I am running a machine learning algorithm that needs to run in our own environment because it uses local cameras. We have deployed a self-managed Kubernetes cluster with the master on an EC2 instance, and we connect the worker nodes to the master over a VPN. The problem is that this doesn't scale the way I need it to, and honestly, deploying a new node is a bit of a hassle.
I was wondering how I can deploy on-premise nodes to EKS, or whether there is any other approach that would make our lives easier.

Well, having on-prem nodes connected to a master in Amazon is a wild idea. Nodes should report to the master frequently, and failure to do so due to Internet hiccups may hurt you badly. It's a fragile setup even if it works fine right now. You should consider installing the master locally.
But anyway, how do you connect nodes to the master? Does every node have its own VPN connection? How many masters do you have? Generally, you should set up an AWS Site-to-Site VPN connection between your VPC and your local subnet using IPsec. That way there is a permanent tunnel between the subnets, and adding more nodes becomes a trivial task, depending on how you deployed everything. At least that's how it seems to me.
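If it helps, a rough sketch of that setup with the AWS CLI looks like the following (the router IP, ASN, CIDR, and all resource IDs below are placeholders for your own values):

    # Register your on-prem router as a customer gateway
    aws ec2 create-customer-gateway --type ipsec.1 \
        --public-ip 203.0.113.10 --bgp-asn 65000

    # Create a virtual private gateway and attach it to your VPC
    aws ec2 create-vpn-gateway --type ipsec.1
    aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-xxxxxxxx --vpc-id vpc-xxxxxxxx

    # Create the IPsec VPN connection, using static routing for simplicity
    aws ec2 create-vpn-connection --type ipsec.1 \
        --customer-gateway-id cgw-xxxxxxxx --vpn-gateway-id vgw-xxxxxxxx \
        --options '{"StaticRoutesOnly":true}'

    # Route your on-prem CIDR through the tunnel
    aws ec2 create-vpn-connection-route --vpn-connection-id vpn-xxxxxxxx \
        --destination-cidr-block 192.168.0.0/24

You still configure the on-prem side of the tunnel on your router once, but after that, new nodes in that subnet need no per-node VPN setup.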

Related

Connecting cluster to external DB: Fixed node IP addresses to be whitelisted?

I have a cluster running in DigitalOcean and need to connect to an RDS-hosted database. What I would normally do is whitelist the IP addresses of the machines that will be accessing the database in the AWS Security Groups, and then have clear access to the database.
The thing is that DigitalOcean Kubernetes nodes get recycled every now and then, and their IPs change. I can manually update the allowed IPs for every node in my cluster, but this doesn't seem like a very solid solution.
How would I go about fixing the IP of the cluster, or maybe setting up some sort of gateway with a fixed IP that every outgoing connection from the cluster would go through? I have been studying, but I'm fairly ignorant about networking, so anything that could point me in the right direction would be great.
I've seen there are forward proxies (gateways, from what I've read), but I couldn't find much information on them; most of my research ends up at API gateways (like Kong), which I understand are the exact opposite of what I need.
Any help?
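For reference, the manual whitelisting described above boils down to an AWS CLI call like this, repeated for every node IP (the security group ID and node IP are placeholders, and port 5432 assumes a PostgreSQL RDS instance):

    # Allow one node's public IP to reach the database's security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-xxxxxxxx \
        --protocol tcp --port 5432 \
        --cidr 203.0.113.25/32

A NAT-style egress gateway with a static IP would let a rule like this be created once instead of per node.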

Set up external firewall network security with kops and AWS

At the moment I want to introduce an external firewall solution for Kubernetes on AWS.
I'm using kops to help build the production environment. It's a pretty good framework.
However, I'm new to AWS networking, and Kubernetes is also a new thing for me.
What I want to do is set up a firewall for all requests coming to the services within Kubernetes.
And if someone hacked a container within Kubernetes, they shouldn't be able to attack any other containers in the cluster. Any ideas or suggestions?
For Kubernetes in general, restricting actions at a network level can be done (assuming you're on 1.7 or later) via Network Policies.
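As a minimal sketch (the namespace and app labels are purely illustrative), a policy that only lets pods labelled app=frontend reach the backend pods could look like:

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend-only
      namespace: default
    spec:
      # Applies to all pods labelled app=backend
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        # Only pods labelled app=frontend may connect; all other ingress is denied
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
    EOF

Keep in mind that Network Policies are only enforced if your CNI plugin supports them (Calico and Weave Net do, for example, and kops can install either).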
In addition to that, if you're concerned about malicious containers in your cluster, I'd recommend reviewing the CIS Kubernetes benchmark to make sure you've locked down your cluster, as out of the box there appear to be some concerns with kops.
OK, I finally figured out a solution. At the beginning I tried to use Fortinet FortiGate with kops, but it wasn't working and caused a lot of issues; it seems that changing the route tables conflicts with kops. In any case, it's not a good idea to rewire subnets and firewall instances behind kops's back. Later we switched to Deep Security, and all is good. The only issue is that kops doesn't support custom launch configurations at the moment. I hope this helps anyone who wants to set up a secure environment on Kubernetes.

Kubernetes - Cross AZ traffic

Does anyone have any advice on how to minimize cross-AZ traffic for inter-pod communication when running Kubernetes in AWS? I want to keep my microservices pinned to the same availability zone so that microservice-a, which resides in az-a, transmits its payload to microservice-b, also in az-a.
I know you can pin pods to a node label and keep the traffic in the same AZ, but in addition to minimizing cross-AZ traffic I also want to maintain HA by deploying to multiple AZs.
If you're willing to use alpha features, you could use inter-pod affinity or node affinity rules to implement such behaviour without losing high availability.
You'll find details in the official documentation.
Without that, you could just have one deployment pinned to one node, a second deployment pinned to another node, and one service which selects pods from both deployments - example code can be found here.
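A rough sketch of the affinity approach (the app labels and image are illustrative, and the topology key is topology.kubernetes.io/zone on current clusters, failure-domain.beta.kubernetes.io/zone on older ones):

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: microservice-b
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: microservice-b
      template:
        metadata:
          labels:
            app: microservice-b
        spec:
          affinity:
            podAffinity:
              # Prefer to schedule next to microservice-a pods in the same zone
              preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 100
                  podAffinityTerm:
                    labelSelector:
                      matchLabels:
                        app: microservice-a
                    topologyKey: topology.kubernetes.io/zone
          containers:
            - name: microservice-b
              image: example/microservice-b:latest
    EOF

Using a preferred rather than a required rule keeps the scheduler free to place pods in another zone when co-location isn't possible, which preserves HA.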

Usefulness of Amazon ELB (Elastic Load Balancing)

We're considering implementing an ELB in our production Amazon environment. It seems it will require that the production server instances be synced by a nightly script. Also, there is a Solr search engine which will need to be replicated and maintained for each paired server. There's also the issue of debugging: which server is a request going to? If there's a crash, do you have to search both logs? If a production app isn't behaving, how do you isolate which instance it is, or do you just deploy debugging code to both instances?
We aren't having issues with response time or server load. This seems like added complexity in exchange for a limited upside. It seems like it may be overkill to me. Thoughts?
You're enumerating the problems that arise when you need high availability :)
You need to consider how critical the availability of the service is, and take that into account when deciding what is the right solution and what is just over-engineering :)
Solutions to some caveats:
To avoid nightly syncs: use an EC2 instance running an NFS server and mount the share on both EC2 instances - see the sketch after this list. (Or use Amazon EFS when it becomes available.)
Debugging: you can configure the EC2 instances behind the ELB to have public IPs, limited in the Security Groups to just the developers' PCs, and when debugging point your /etc/hosts (or the Windows equivalent) at one particular server.
Logs: store the logs in S3 (or on the NFS server mentioned above).
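A minimal sketch of the NFS idea, assuming a hypothetical server reachable at nfs.internal that exports /data (the hostname, export, and mount point are placeholders):

    # On each application instance: install the NFS client and mount the shared export
    sudo apt-get install -y nfs-common
    sudo mkdir -p /var/www/shared
    sudo mount -t nfs nfs.internal:/data /var/www/shared

    # Persist the mount across reboots
    echo 'nfs.internal:/data /var/www/shared nfs defaults 0 0' | sudo tee -a /etc/fstab

For the logs, a cron job on each instance can push to a per-host S3 prefix, e.g. aws s3 sync /var/log/app s3://my-bucket/logs/$(hostname)/ (the bucket name is a placeholder).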

Architecture for Amazon Web Services / AWS

We're planning to upgrade our AWS setup to more recent hardware. The current setup is EC2-Classic instance-based servers attached to EBS volumes which contain all of the apps' data. The concept behind it is that if one of the instance-based servers were lost, we could recreate the server from its AMI, re-attach the volume with the data, and be up and running again.
As we upgrade to EBS-backed servers (and move into a VPC), the risk of a server being destroyed is mitigated. Is it worth it to just keep all the apps on the new servers and forget about keeping them on attached volumes?
The strategy we used when moving to VPC was to make as few changes as possible: just launch new servers into a public subnet of a VPC, nothing else. If you start to change too much at once and encounter a problem, it will be harder to identify the source of the problem because there are so many changes.
For that reason, I recommend keeping your current setup. There might be advantages to a new architecture, but wait until after you've safely migrated to VPC.