Multiple EKS audit log files - amazon-web-services

I have enabled EKS audit logs, and in the cloud logs two files are being generated for audit logs alone, each being written to in parallel.
Why are two files generated, and is there any difference between them?

Related

Fluentd agent setup on GCP VM is not pushing logs to Logs Explorer

We have set up a fluentd agent on a GCP VM to push logs from a syslog server (the VM) to GCP's Google Cloud Logging. The current setup is working fine and is pushing more than 300k log entries to Stackdriver (Google Cloud Logging) per hour.
Due to increased traffic, we are planning to increase the number of VMs behind the load balancer. However, the new VM with the fluentd agent is not able to push logs to Stackdriver. After the VM is first activated, it does send a few entries to Stackdriver, but after that it stops working.
I tried the options below to set up the fluentd agent and resolve the issue:
Created a new VM from scratch and installed the fluentd logging agent using the Google Cloud documentation.
Duplicated the already working VM (with the logging agent) by creating images.
Restarted the VM.
Reinstalled the logging agent.
Debugging I did:
Checked all the configurations for the google-fluentd agent. Everything is correct and exactly matches the currently working VM instance.
Checked /var/log/google-fluentd/google-fluentd.log for any logging errors, but there are none.
Checked if the logging API is enabled. As there are already a few million logs per day, I assume we are fine on that front.
Checked the CPU and memory consumption. It is close to 0.
Tried all the solutions I could find on Google (there are not many).
It would be great if someone can help me identify where exactly I am going wrong. I have checked configurations/setup files multiple times and they look fine.
Troubleshooting steps to resolve the issue:
Check whether you are using the latest version of the fluentd agent; if not, try upgrading it. Refer to Upgrading the agent for information.
If you are running very old Compute Engine instances, or Compute Engine instances created without the default credentials, you must complete the Authorizing the agent procedures.
Another point to check is how you are configuring an HTTP proxy. If you are using an HTTP proxy for proxying requests to the Logging and Monitoring APIs, check whether the metadata server is reachable. The metadata server has to be reachable directly (not through the proxy) when configuring an HTTP proxy; see the checks sketched after this list.
Check if you have any log exclusions configured that are preventing the logs from arriving. Refer to Exclusion filters for information.
Try uninstalling the Fluentd agent and using the Ops Agent instead (note that it collects syslog logs with no extra setup), and check whether the logs appear. Combining logging and metrics into a single agent, the Ops Agent uses Fluent Bit for logs, which supports high-throughput logging, and the OpenTelemetry Collector for metrics. Refer to Ops agent for more information.
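To sanity-check the problem VM, the commands below are one possible starting point (a sketch, assuming the standard google-fluentd package install; service names and paths may differ on your image):

# The metadata server must be reachable directly, bypassing any HTTP proxy:
curl -H "Metadata-Flavor: Google" --noproxy metadata.google.internal \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"

# Confirm the agent is running and restart it to force a fresh connection attempt:
sudo service google-fluentd status
sudo service google-fluentd restart

# Watch the agent's own log for authentication or quota errors while it retries:
sudo tail -f /var/log/google-fluentd/google-fluentd.log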

How can I configure Elastic Beanstalk to show me only the relevant log file(s)?

I'm an application developer with very limited knowledge of infrastructure. At my last job we frequently deployed Java web services (built as WAR files) to Elastic Beanstalk, and much of the infrastructure had already been set up before I ever started there, so I got to focus primarily on the code and not how things were tied together. One feature of Elastic Beanstalk that often came in handy was the button to "Request Logs," where you can select either the "Last 100 Lines" or the "Full Logs." What I'm used to seeing when clicking this button is to directly view the logs generated by my web service.
Now, at the new job, the infrastructure requirements are a little different, as we have to Dockerize everything before deploying it. I've been trying to stand up a Spring Boot web app inside a Docker container in Elastic Beanstalk, and have been running into trouble with that. And I also noticed a bizarre difference in behavior when I went to "Request Logs." Now when I choose one of those options, instead of dropping me into the relevant log file directly, it downloads a ZIP file containing the entire /var/log directory, with quite a number of disparate and irrelevant log files in there. I understand that there's no way for Amazon to know, necessarily, that I don't care about X log file but do care about Y log file, but was surprised that the behavior is different from what I was used to. I'm assuming this means the EB configuration at the last job was set up in a specific way to filter the one relevant log file, or something like that.
Is there some way to configure an Elastic Beanstalk application to only return one specific log file when you "Request Logs," rather than a ZIP file of the /var/log directory? Is this done with ebextensions or something like that? How can I do this?
Not too sure about the Beanstalk console, but using the EB CLI, if you enable CloudWatch log streaming for your Beanstalk instances (note that storing logs in CloudWatch will cost you), you can run:
eb logs --stream --log-group <CloudWatch logGroup name>
The above command gives you the logs for your instance specific to the file/log group you specified. For it to work, you need to enable CloudWatch log streaming:
eb logs --stream enable
As an aside, to determine which log groups your environment presently has, run:
aws logs describe-log-groups --region <region> | grep <beanstalk environment name>
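To answer the .ebextensions part of the question: streaming can also be turned on declaratively by dropping a config file into .ebextensions in your application source. A minimal sketch (the file name cloudwatch-logs.config is arbitrary, and the option values shown are assumptions to adjust):

# .ebextensions/cloudwatch-logs.config
option_settings:
  aws:elasticbeanstalk:cloudwatch:logs:
    StreamLogs: true          # stream instance logs to CloudWatch Logs
    DeleteOnTerminate: false  # keep the log groups when the environment is terminated
    RetentionInDays: 7        # how long CloudWatch keeps the streamed logs

With streaming enabled, each instance log file gets its own log group, so you can follow just the one you care about instead of downloading the whole /var/log bundle.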

Managing/deleting/rotating/streaming Elastic Beanstalk Logs

I am using Amazon EB for the first time. I've set up a Rails app running on Linux and Puma.
So far, I've been viewing logs through the eb logs command. I know that we can set EB to rotate the logs to S3 or stream them to CloudWatch.
My questions here revolve around the deletion of the various log files.
Will the various logs, such as puma.log, be deleted automatically, or must I do it myself?
If I set up log rotation to S3, will the log files on the EC2 instance be deleted (and a fresh copy created in their place) when they get rotated to S3? Or do they just keep growing indefinitely?
If I stream them to CloudWatch, will the same copy of the log be kept on the EC2 instance and grow indefinitely?
I've googled around but can't seem to find any notion of "Log management" or "log deletion" in the docs or on SO.
I'm using beanstalk on a LAMP project and I can answer a few of your questions.
You have to set up a log rotation policy, at least for your app logs. Check whether your base image already rotates these logs for you. On Linux the config is in /etc/logrotate.conf; see the example sketch at the end of this answer.
When you use S3 logs with Beanstalk, it already tails and deletes the logs after 15 minutes. http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html#health-logs-s3location
The same copy of the log will be kept on your EC2 instance. Your log rotation policy in /etc/logrotate.conf is what will delete it. awslogs keeps some metadata about which chunk of the logs it has already processed, so it does not create duplicates.
If you want an example on how to use cloudwatch logs with elasticbeanstalk check: http://www.albertsola.pro/store-aws-beanstalk-symfony-and-apache-logs-in-cloudwatch-logs/
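As a sketch of that rotation policy, something like the following could go in /etc/logrotate.d/ (the puma.log path is an assumption; check where your platform actually writes the app log):

# /etc/logrotate.d/puma (hypothetical file, hypothetical log path)
/var/app/current/log/puma.log {
    # rotate once per day and keep the last 7 rotated, gzipped files
    daily
    rotate 7
    compress
    # don't fail if the log is absent, and skip rotation when it's empty
    missingok
    notifempty
    # truncate in place so Puma keeps its open file handle
    copytruncate
}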

Task runner is not running on my local machine

I am running Task Runner to perform the defined task, and while running it I am getting an exception saying that it can't upload log files to S3. After debugging the Task Runner application I found that it uses the ACL option when uploading Task Runner log files to S3; due to some restrictions I should not use the ACL option while uploading files to S3.
Please suggest whether I can do anything to resolve this without configuring ACLs on objects.
Do you mean the computational resource owner cannot have write permissions on the S3 log path? You will need to grant write permissions on the log path (through ACLs) if you want Task Runner to upload the logs automatically to S3.
If you don't want to push the Task Runner log files to S3, you can disable logging by not specifying "logUri" when starting Task Runner. In that case, Task Runner will not try to upload the log files and should not fail.
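For illustration, logUri is just one of Task Runner's startup arguments, so omitting it is enough; a sketch of an invocation without it (jar version, credentials path, worker group, and region are placeholders):

# Start Task Runner without --logUri so it never attempts the S3 log upload
java -jar TaskRunner-1.0.jar \
    --config ~/credentials.json \
    --workerGroup=myWorkerGroup \
    --region=us-east-1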

Where will I find access logs of EC2 Instance in AWS?

I need to check who created the instance, and who stopped/terminated/rebooted the instance, along with the time.
Use AWS CloudTrail.
Please see the documentation: AWS CloudTrail.
You can get a complete history of API calls made to your account.
It is not expensive. Check pricing at: AWS CloudTrail Pricing
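For example, recent management events can be queried straight from the CLI; a sketch (event name, region, and result count are illustrative) that lists who called StopInstances, with timestamps, from the last 90 days of event history:

# Each returned event includes the calling identity, event time, and request details
aws cloudtrail lookup-events \
    --region us-east-1 \
    --lookup-attributes AttributeKey=EventName,AttributeValue=StopInstances \
    --max-results 20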
For Linux, log files are located under the /var/log directory and its subdirectories. Within this directory there are several log files with different names and which record different types of info. Some examples include, but are not limited to:
/var/log/messages
Contains global system messages, including the messages that are logged during system startup. Includes mail, cron, daemon, kern, auth, etc.
/var/log/auth.log
Authentication logs
/var/log/kern.log
Kernel logs
/var/log/cron.log
Crond logs
https://blog.logentries.com/2013/11/where-are-my-aws-logs/
You will be able to access the details of the EC2 instance status from the console dashboard for a short period of time.
Unless you enable CloudTrail, you won't be able to access the logs and activity of what happened in the AWS console more than a few days back.
CloudTrail requires you to use an S3 bucket to store the logs, and the cost you incur for the CloudTrail service is the cost of the space used to store the logs in S3.
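As a reference point, a trail that keeps delivering events to S3 can be created from the CLI; a sketch with placeholder names (my-trail, my-cloudtrail-bucket), assuming the bucket policy already allows CloudTrail to write to it:

# Create the trail, point it at an existing S3 bucket, then start recording events
aws cloudtrail create-trail --name my-trail --s3-bucket-name my-cloudtrail-bucket
aws cloudtrail start-logging --name my-trail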