Track API requests to each pod on my Kubernetes cluster

I am trying to track all API requests to my Kubernetes cluster running on some EC2 instances. How do I go about doing this?
I basically want to see which IP each request was sent from, what data was sent, and any other identifying information.
I tried using Prometheus but have not had any luck so far.

You can enable auditing on your cluster. To track a specific resource, use resourceNames in the audit policy to restrict a rule to that resource's name.
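As a minimal sketch (the pod name is a placeholder), an audit policy that records full request and response bodies for pod requests might look like this:

```yaml
# audit-policy.yaml -- illustrative policy; resource names are placeholders
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record full request and response bodies for pod operations,
  # which captures the request payload the question asks about.
  - level: RequestResponse
    resources:
      - group: ""                  # "" is the core API group
        resources: ["pods"]
        # resourceNames: ["my-pod"]   # uncomment to audit one named pod
  # Record metadata only (user, source IPs, verb, timestamp) for everything else.
  - level: Metadata
```

The policy file is passed to the kube-apiserver with --audit-policy-file, and --audit-log-path controls where events are written. Each audit event records the caller's source IPs, user, and verb, which covers the information you're after.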

You can set up auditing in your Kubernetes cluster.
Refer to the official documentation: https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/

Related

Multicluster istio without exposing kubeconfig between clusters

I managed to get multicluster Istio working by following the documentation.
However, this requires the kubeconfig of each cluster to be set up on the others, and I am looking for an alternative to doing that. Based on presentations from solo.io and Admiral, it seems it might be possible to set up ServiceEntries manually to accomplish this. The Istio docs are scarce in this area. Does anyone have pointers on how to make this work?
There are some advantages to setting up the discovery manually or through our CD processes:
- if one cluster gets compromised, the credentials to the other clusters don't leak
- it allows us to limit which services are discovered
I posted the question on Twitter as well and hope to get some feedback from the Istio contributors.
As per the Admiral docs:
Admiral acts as a controller watching k8s clusters that have a credential stored as a secret object in the namespace Admiral is running in. Admiral delivers Istio configuration to each cluster to enable services to communicate.
No matter how you manage the control-plane configuration (manually or with a controller), you have to store and provision credentials somehow; in this case it is done with secrets.
You can store your secrets securely in git with sealed-secrets.
You can read more here.
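For the manual ServiceEntry approach mentioned in the question, a rough sketch under the older gateway-based multicluster model might look like this (the hostname, address, and service port are placeholders; 15443 is Istio's mTLS auto-passthrough port in that setup):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: reviews-remote
spec:
  hosts:
    - reviews.remote.global      # hypothetical name for the remote service
  location: MESH_INTERNAL        # treat the endpoint as part of the mesh
  ports:
    - number: 9080
      name: http
      protocol: HTTP
  resolution: STATIC
  endpoints:
    - address: 203.0.113.10      # placeholder: remote cluster's ingress gateway
      ports:
        http: 15443              # gateway's mTLS auto-passthrough port
```

With entries like this in each cluster, cross-cluster traffic flows through the remote gateway without either cluster holding the other's kubeconfig.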

AWS CloudWatch sending logs but not custom metrics to CloudWatch

First-time asker here.
I've been trying to set up AWS CloudWatch to monitor disk usage on a Linux EC2 instance. I'm interested in doing this using just the CloudWatch agent, which I've installed according to the how-to found here. The install runs fine, and I've made sure I created an IAM role for the instance as described here. Unfortunately, whenever I run amazon-cloudwatch-agent.service it only sends log files and not the custom used_percent measurement I specified. I see this error when I tail the logs:
2021-06-18T15:41:37Z E! WriteToCloudWatch failure, err: RequestError: send request failed
caused by: Post "https://monitoring.us-west-2.amazonaws.com/": dial tcp 172.17.1.25:443: i/o timeout
I've done my best google-fu but have gotten nowhere thus far. If you've got any advice it would be appreciated.
Thank you
Belated answer to my own question: I had to create a security group rule that accepts traffic from that same security group!
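A sketch of that rule with the AWS CLI (the group ID is a placeholder). The private 172.17.x.x address in the error suggests the monitoring endpoint resolves to a VPC interface endpoint, whose security group needs to allow HTTPS from the instances:

```sh
# Allow HTTPS (443) into the security group from members of that same group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --source-group sg-0123456789abcdef0
```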
I was having the same issue, and it definitely wasn't a network restriction in my case, as I was still able to telnet to the monitoring endpoint.
From the AWS docs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-iam-roles-for-cloudwatch-agent.html
One role or user enables CloudWatch agent to be installed on a server and send metrics to CloudWatch. The other role or user is needed to store your CloudWatch agent configuration in Systems Manager Parameter Store. Parameter Store enables multiple servers to use one CloudWatch agent configuration.
If you're using the default CloudWatch agent configuration wizard, you may need the extra CloudWatchAgentAdminPolicy managed policy attached to your role for the agent to connect to the monitoring service, since the wizard stores the configuration in Systems Manager Parameter Store.
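For comparison, a minimal sketch of the agent configuration section that collects the disk used_percent metric (mount points and interval are illustrative):

```json
{
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["*"],
        "metrics_collection_interval": 60
      }
    }
  }
}
```

If this section is missing or malformed in the config the agent actually loads, it will happily ship logs while sending no custom metrics at all, which matches the symptom described.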

AWS ECS custom CloudWatch metrics

I'm looking for a way to publish custom metrics over the StatsD protocol from Amazon Elastic Container Service. I've found documentation on how to set up the Amazon CloudWatch agent on EC2, and it works well. However, I'm failing to find the correct configuration for the Dockerfile. Quite probably some set of custom IAM permissions will also be required.
Is it possible to have Docker containers running on AWS ECS report custom metrics to AWS CloudWatch over StatsD?
Rather than building your own container, you can use the one provided by Amazon. This article explains how, including a link to an example daemon service task configuration.
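If you do run the agent yourself, a sketch of the configuration that enables the StatsD listener might look like this (the values shown are the agent's documented defaults); the task role would also need permission to call cloudwatch:PutMetricData:

```json
{
  "metrics": {
    "metrics_collected": {
      "statsd": {
        "service_address": ":8125",
        "metrics_collection_interval": 10,
        "metrics_aggregation_interval": 60
      }
    }
  }
}
```

Application containers in the task then emit StatsD packets to the agent container on UDP port 8125, and the agent forwards them to CloudWatch as custom metrics.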

How does "type: LoadBalancer" creates external Load Balancer in Kubernetes?

How does Kubernetes know which external cloud provider it is running on?
Is there a specific service running on the master that finds out whether the Kubernetes cluster is running in AWS or Google Cloud?
Even if it can find out whether it is AWS or Google, where does it take the credentials from to create the external AWS/Google load balancers? Do we have to configure the credentials somewhere so that it picks them up from there and creates the external load balancer?
When installing Kubernetes on a cloud provider, you must specify the --cloud-provider=aws flag on a variety of components:
kube-controller-manager - this is the component which interacts with the cloud API when cloud-specific requests are made. It runs control loops which ensure that any cloud provider request is completed. So when you request a Service of type LoadBalancer, the controller-manager is what checks and ensures it was provisioned.
kube-apiserver - this simply ensures the cloud APIs are exposed, e.g. for persistent volumes.
kubelet - ensures that workloads are provisioned on nodes. This is especially the case for things like persistent storage (EBS volumes).
Do we have to configure the credentials somewhere so that it picks it from there and creates the external load balancer?
All the above components should be able to query the required cloud provider APIs. Generally this is done using IAM roles which ensure the actual node itself has the permissions. If you take a look at the kops documentation, you'll see examples of the IAM roles assigned to masters and workers to give those nodes permissions to query and make API calls.
It should be noted that this model is changing shortly, to move all cloud provider logic into a dedicated cloud-controller-manager which will have to be pre-configured when installing the cluster.
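For context, this is the kind of manifest that triggers the flow described above; the controller-manager (or cloud-controller-manager) sees the LoadBalancer type and calls the cloud API to provision one. Names and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # hypothetical service name
spec:
  type: LoadBalancer       # watched by the cloud controller loops
  selector:
    app: web               # pods that back the load balancer
  ports:
    - port: 80             # port exposed on the provisioned load balancer
      targetPort: 8080     # container port traffic is forwarded to
```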

Getting Cloudwatch EC2 server health monitoring into ElasticSearch

I have an AWS account with several EC2 servers and an Elasticsearch domain set up to take the syslogs from these servers. However, in CloudWatch, and when investigating a specific server instance in the EC2 control panel, I see specific metrics and graphs for things like CPU, memory load, and storage use. Is there some way I can pipe this information into my Elasticsearch as well?
Set up Logstash and use this plugin https://github.com/EagerELK/logstash-input-cloudwatch
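A sketch of the input configuration based on that plugin's README (the namespace, metric, filter tag, and region are illustrative):

```
input {
  cloudwatch {
    namespace => "AWS/EC2"
    metrics   => ["CPUUtilization"]
    filters   => { "tag:Monitoring" => "Yes" }
    region    => "us-east-1"
  }
}
```

The plugin polls the CloudWatch API on an interval and emits each datapoint as an event, which Logstash can then ship to Elasticsearch with a standard output block.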
Or go the other way: use the AWS CloudWatch Logs agent to put your syslogs into CloudWatch, and stop using Elasticsearch.