Is it possible to write watchtower logs to a single file in AWS? (Django)

I have developed one small application. For logging, I'm using watchtower with AWS. The logs are working fine and are inserted into CloudWatch, but they are grouped per source file; I want all logs to be registered under a single file only (for example api.views). Is this possible? If yes, how?

Solved this problem: I wrote a logging function in one file and call that function wherever I want to write logs, so all logs are saved under the same file name.
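For anyone looking for a concrete shape for this, below is a minimal sketch of the shared-logger approach described above. The module name, log group, and stream name are placeholders, and the keyword arguments to CloudWatchLogHandler have changed between watchtower releases (older versions use log_group/stream_name, newer ones log_group_name/log_stream_name), so check the version you have installed.

```python
# cloud_logging.py -- hypothetical shared module; import get_logger() everywhere
# instead of calling logging.getLogger() directly, so every record goes through
# the same watchtower handler and therefore the same CloudWatch stream.
import logging
import watchtower

_handler = None  # created once, shared by every logger returned below

def get_logger(name="api.views"):
    global _handler
    if _handler is None:
        # Keyword names may differ in newer watchtower versions
        # (log_group_name / log_stream_name).
        _handler = watchtower.CloudWatchLogHandler(
            log_group="my-django-app",       # placeholder log group
            stream_name="application-logs",  # single stream for all modules
        )
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    if _handler not in logger.handlers:
        logger.addHandler(_handler)
    return logger
```

Usage is then just `log = get_logger(__name__)` followed by `log.info(...)` in any view or module.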

Related

In AWS, writing larger content to a file with the PutFile REST API is not working

In AWS, what is the file size limit when adding a file from the command line?
I am trying to fetch the schema DDL using dbms_metadata.fetch and add it to a file in AWS CodeCommit using the PutFile REST API: https://docs.aws.amazon.com/codecommit/latest/APIReference/API_PutFile.html
For larger schemas (> 60 KB) everything runs without any error, but when I look at the AWS console I don't see the file I created, meaning the file is not actually being created.
Any idea how I can overcome this?
The limits are described on the Quota page for AWS CodeCommit. For individual files this is 6 MB, so you should have received an error message if you were trying to upload a file larger than this. Below is from the CLI, but will be similar when using the API directly.
An error occurred (FileContentSizeLimitExceededException) when calling the PutFile operation: The maximum file size for adding a file from the AWS CodeCommit console or using the PutFile API is 6 MB. For files larger than 6 MB but smaller than 2 GB, use a Git client.
You would see a similar message via the Console.
If you're saying though that the operation was successful, but you're not seeing the file in CodeCommit, the problem is probably not related to the file size.
Please check if you've followed the right Git procedures for committing and pushing the file. And make sure that you're viewing the same branch as the one that you've pushed the file to.
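If you are calling PutFile programmatically, a size check up front makes the 6 MB quota explicit. Below is a minimal sketch using boto3; the repository, branch, and parent commit id are placeholders you would supply yourself.

```python
import os
import boto3

PUT_FILE_LIMIT = 6 * 1024 * 1024  # PutFile quota quoted in the error above

def put_small_file(repo, branch, parent_commit_id, local_path, repo_path):
    """Upload one file with PutFile, refusing anything over the 6 MB quota."""
    size = os.path.getsize(local_path)
    if size > PUT_FILE_LIMIT:
        raise ValueError(
            f"{local_path} is {size} bytes; PutFile accepts at most 6 MB. "
            "Use a normal Git client (git add/commit/push) instead."
        )
    client = boto3.client("codecommit")
    with open(local_path, "rb") as f:
        return client.put_file(
            repositoryName=repo,
            branchName=branch,
            parentCommitId=parent_commit_id,  # latest commit id on the branch
            filePath=repo_path,
            fileContent=f.read(),
            commitMessage=f"Add {repo_path} via PutFile",
        )
```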

How to get an email notification when there are files in the unprocessed folder in GCP

I am a beginner with GCP. I want to have two folders, processed and unprocessed, in a Cloud Storage bucket. Whenever a file comes into the bucket from any source, a Cloud Function is triggered; if the file is successfully inserted into the target (such as BigQuery), the file goes into the processed folder, and if not, into the unprocessed folder.
I want to know how I can get alerts when files go into the unprocessed (error) folder.
Do I have to write code, or should I write a Cloud Function, or something else that gets me alerts?
Any help will be appreciated.
Thank you.
As you mentioned, using Cloud Functions is the right approach.
A simple function is required; it should then be deployed with the proper trigger associated with the bucket.
More details, with examples, can be found here:
https://cloud.google.com/functions/docs/calling/storage
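As a rough illustration, a background Cloud Function triggered on google.storage.object.finalize could look like the sketch below. The "unprocessed/" prefix is an assumption about your folder layout, and the actual email delivery is left to whatever you wire up (for example a Cloud Monitoring log-based alert on this error entry, or a call to an email API).

```python
# main.py -- minimal sketch of a Cloud Function that reacts to new objects
# and flags anything landing under the unprocessed/ prefix.
import logging

UNPROCESSED_PREFIX = "unprocessed/"  # assumed folder name in the bucket

def notify_unprocessed(event, context):
    """Background function; deploy with --trigger-resource <bucket>
    --trigger-event google.storage.object.finalize."""
    bucket = event.get("bucket", "")
    name = event.get("name", "")
    if name.startswith(UNPROCESSED_PREFIX):
        # An ERROR-level log entry is something a log-based alerting policy
        # can match and turn into an email notification.
        logging.error("File landed in unprocessed folder: gs://%s/%s", bucket, name)
```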

Where are the EMR logs that are placed in S3 located on the EC2 instance running the script?

The question: Imagine I run a very simple Python script on EMR - assert 1 == 2. This script will fail with an AssertionError. The log that contains the traceback with that AssertionError will be placed (if logs are enabled) in an S3 bucket that I specified on setup, and then I can read the log containing the AssertionError once those logs get dropped into S3. However, where do those logs exist before they get dropped into S3?
I presume they would exist on the EC2 instance that the particular script ran on. Let's say I'm already connected to that EC2 instance and the EMR step that the script ran on had the ID s-EXAMPLE. If I do:
[n1c9#mycomputer cwd]# gzip -d /mnt/var/log/hadoop/steps/s-EXAMPLE/stderr.gz
[n1c9#mycomputer cwd]# cat /mnt/var/log/hadoop/steps/s-EXAMPLE/stderr
Then I'll get an output with the typical 20/01/22 17:32:50 INFO Client: Application report for application_1 (state: ACCEPTED) that you can see in the stderr log file you can access on EMR.
So my question is: Where is the log (stdout) to see the actual AssertionError that was raised? It gets placed in my S3 bucket indicated for logging about 5-7 minutes after the script fails/completes, so where does it exist in EC2 before that? I ask because getting to these error logs before they are placed on S3 would save me a lot of time - basically 5 minutes each time I write a script that fails, which is more often than I'd like to admit!
What I've tried so far: I've tried checking the stdout on the EC2 machine in the paths in the code sample above, but the stdout file is always empty.
What I'm struggling to understand is how that stdout file can be empty if there's an AssertionError traceback available on S3 minutes later (am I misunderstanding how this process works?). I also tried looking in some of the temp folders that PySpark builds, but had no luck with those either. Additionally, I've printed the outputs of the consoles for the EC2 instances running on EMR, both core and master, but none of them seem to have the relevant information I'm after.
I also looked through some of the EMR methods for boto3 and tried the describe_step method documented here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr.html#EMR.Client.describe_step - which, for failed steps, has a FailureDetails json dict response. Unfortunately, this only includes a LogFile key which links to the stderr.gz file on S3 (even if that file doesn't exist yet) and a Message key which contains a generic Exception in thread.. message, not the stdout. Am I misunderstanding something about the existence of those logs?
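For reference, the describe_step call mentioned above looks roughly like this (the cluster and step IDs below are placeholders):

```python
import boto3

emr = boto3.client("emr")

def failure_details(cluster_id="j-EXAMPLE", step_id="s-EXAMPLE"):
    """Return the FailureDetails dict for a failed step, if any."""
    status = emr.describe_step(ClusterId=cluster_id, StepId=step_id)["Step"]["Status"]
    # For failed steps this holds a LogFile key (the stderr.gz object in S3,
    # which may not exist yet) and a generic Message -- not the stdout.
    return status.get("FailureDetails", {})
```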
Please feel free to let me know if you need any more information!
It is quite normal with log-collecting agents that the actual log files don't grow; the agent just intercepts stdout to do what it needs.
Most probably, when you configure S3 for the logs, the agent is set up to either read and delete your actual log file, or perhaps to symlink the log file somewhere else, so that file is never actually written when a process opens it for writing.
Maybe try checking if there is a symlink there:
find -L / -samefile /mnt/var/log/hadoop/steps/s-EXAMPLE/stderr
But it could be something other than a symlink that achieves the same logic, and I didn't find anything in the AWS docs, so most probably it is not intended that you have both the S3 copies and the local files at the same time, and you may not find it.
If you want to be able to check your logs more frequently, you may want to think about installing a third-party log collector (Logstash, Beats, rsyslog, Fluentd) and shipping the logs to SolarWinds Loggly or logz.io, or setting up an ELK stack (Elasticsearch, Logstash, Kibana).
You can check this article from Loggly, or create a free account at logz.io and look at the many free shippers that they support.

RDS export log files

I want to export the error log, general log and slow-query log from RDS MySQL.
I am done with all the necessary settings on my DB instance.
I have exported the general log and slow-query log to file (log_output: FILE).
What is the best approach to do this?
I am thinking of using Lambda for this, but I am not able to find a suitable way to trigger my Lambda function, e.g. whenever a new log file is created my Lambda function must be triggered.
Is it possible to push these log file events to CloudWatch directly?
I have gone through the documentation, but I am not able to find such a mechanism.
How should I proceed?
There is no supported event source for RDS in Lambda. I would suggest setting the log output parameter to a table so each kind of log is stored in a table, and calling the log rotation procedures to remove old data and save disk space.
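If you would still rather stay with file-based logs (log_output: FILE), one common workaround, different from the table-based suggestion above, is a Lambda on an EventBridge/CloudWatch Events schedule that polls the RDS log APIs for recently written files and forwards their contents wherever you need them. Below is a minimal sketch; the instance identifier and the destination are placeholders.

```python
import time
import boto3

rds = boto3.client("rds")
DB_INSTANCE = "my-db-instance"  # placeholder identifier

def handler(event, context):
    """Scheduled Lambda: fetch portions of log files written in the last 15 minutes."""
    since_ms = int((time.time() - 15 * 60) * 1000)
    files = rds.describe_db_log_files(
        DBInstanceIdentifier=DB_INSTANCE,
        FileLastWritten=since_ms,
    )["DescribeDBLogFiles"]

    for f in files:
        marker, pending = "0", True
        while pending:
            chunk = rds.download_db_log_file_portion(
                DBInstanceIdentifier=DB_INSTANCE,
                LogFileName=f["LogFileName"],
                Marker=marker,
            )
            # Forward chunk["LogFileData"] to your destination of choice
            # (CloudWatch Logs, S3, an indexer, ...). Printing is a stand-in.
            print(chunk.get("LogFileData", ""))
            marker = chunk["Marker"]
            pending = chunk["AdditionalDataPending"]
```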

How to collect log files from S3? (Elastic Beanstalk)

I've enabled log file rotation to Amazon S3; every hour Amazon creates the file "var_log_httpd_rotated_error_log.gz" for every instance in my Elastic Beanstalk environment.
First question:
The log files will not overlap? So every time Amazon saves the file to S3, it also deletes it from the instance and creates a new one, right?
Second question:
How could I collect all those files? I want to build a server that collects all those files and lets me search for text in them.
That is what rotating means. It stops writing to one file and begins writing to a new file.
If they are being uploaded to S3, you can write code to download and index those files. Splunk or Loggly may help you here.
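As a starting point for the "download and index" approach, the sketch below lists the rotated objects under a prefix and greps them for a string. The bucket name and prefix are placeholders for wherever Elastic Beanstalk publishes your rotated logs.

```python
import gzip
import boto3

s3 = boto3.client("s3")

def grep_rotated_logs(bucket, prefix, needle):
    """Print the key of every rotated log object under prefix containing needle."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            if obj["Key"].endswith(".gz"):
                body = gzip.decompress(body)
            if needle in body.decode("utf-8", errors="replace"):
                print(obj["Key"])

# Example (placeholder names):
# grep_rotated_logs("my-eb-logs-bucket", "logs/httpd/", "Internal Server Error")
```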