AWS CloudWatch logs open from middle

All of a sudden, the AWS CloudWatch logs started to open from the middle or from the beginning of the log stream. They used to open at the end of the log stream, showing the latest lines. I wonder if this is something I can configure, or has AWS just changed something?
It is really frustrating when you want to follow the progress of your Lambda app but cannot, because when you open the log in AWS it shows the first lines of that log stream, and in order to see the latest lines you need to set a custom time frame. It also doesn't allow you to set a future timestamp as the end time, which forces you to keep updating the end time to see the new lines. I hope there is a solution for getting it to open at the tail of the log stream.

Try clicking on ALL in the timeframe option. For me they recently started setting a start time, and logs are visible from that time onwards, like you described, but when I click on ALL, it shows the logs normally, like it used to.
The second thing you can do is to have a rolling start for the logs (for example, the last 15 minutes or the last hour).
To do that, add:
;start=PT1H at the end of your URL if you want the last hour
;start=PT15M at the end of your URL if you want the last 15 minutes
You can change the numbers depending on the timeframe you want.
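For illustration, here is a minimal Python sketch of appending that relative start time to a log stream URL; the region, log group, and stream names below are made up, and this assumes the classic console URL format:
# Hypothetical CloudWatch Logs console URL for a single log stream (classic console format).
base_url = ("https://console.aws.amazon.com/cloudwatch/home?region=us-east-1"
            "#logEventViewer:group=/aws/lambda/my-function"
            ";stream=2021/01/01/[$LATEST]abcdef1234567890")
url_last_hour = base_url + ";start=PT1H"          # open showing only the last hour
url_last_15_minutes = base_url + ";start=PT15M"   # open showing only the last 15 minutes
print(url_last_hour)
print(url_last_15_minutes)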

Related

CloudWatch - Delete logs after they are transferred

I have CloudWatch set up on my EC2 instance to transfer logs to specific log groups.
Over time, those logs can grow quite big, so I wanted to delete them, for example, on a weekly basis.
I was wondering if there is any option for setting up auto-cleanup of the transferred logs from the EC2 instance using CloudWatch?
What would be the best way to achieve that?
To remove the logfiles from EC2 running Linux, you have two choices:
If you're using logfiles that already rotate based on time or other value, you can use the auto_removal option to delete them after the log agent is finished. See docs.
If you're using a file that's constantly updated, you'll need to use logrotate, which is a program invoked by CRON that will rename, compress, and delete old files. There's a good intro doc here.
If you use logrotate, here's an example config that I've found useful for high-volume log sources. It performs a rotate if the file reaches 100 megabytes, rather than just doing it every day (you'll need to run it from cron.hourly to make that useful). Most important, it enables copytruncate, which will truncate the file in-place, allowing the program to continue writing to it.
/var/log/filename.log {
    rotate 7            # keep at most 7 rotated files
    daily
    maxsize 100M        # also rotate as soon as the file exceeds 100 MB
    nodateext           # use numeric suffixes rather than dates
    missingok           # don't error if the file is missing
    notifempty          # skip rotation if the file is empty
    copytruncate        # truncate in place so the writer keeps its file handle
    compress
    delaycompress       # compress on the following rotation, not immediately
}

Azure Stream Analytics with Event Hub input stream position

Setup
I use Azure Stream Analytics to stream data into an Azure data warehouse staging table.
The input source of the job is an Event Hub stream.
I notice that when I update the job, the job's input event backlog goes up massively after the start.
It looks like the job starts to process the complete Event Hub queue again from the beginning.
Questions
How is stream position management organised in Stream Analytics?
Is it possible to define a stream position where the job starts (for example, only events queued after a specific point in time)?
What I have done so far
I noticed a similar question here on StackOverflow.
It mentions a variable named "eventStartTime".
But since I use an "asaproj" project within Visual Studio to create, update, and deploy the job, I don't know where to set this before deploying.
When you update a job without stopping it, it keeps the previous "Job output start time" setting, so it is possible for the job to start processing the data from the beginning.
You can stop the job first, then choose the "Job output start time" before you start the job again.
You can reference this document https://learn.microsoft.com/en-us/azure/stream-analytics/start-job for detailed information on each mode. For your scenario, the "When last stopped" mode is probably the one you need, and it will not process data from the beginning of the Event Hub queue.

Any way to search across all log streams in a CloudWatch log group?

In the AWS console, can I search for a string in all log streams of a log group? Right now I have to go into each log stream and then search, which takes a lot of time if I want to search across the log streams.
Once you click the log group in the CloudWatch Logs console, but before you click into an individual log stream, there is a button at the top right of the page labeled "Search Log Group". Click that, and it will take you to a page where you can search across all logs in the log group in a given time frame.
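If you prefer to search programmatically rather than in the console, here is a minimal sketch using boto3; the log group name and filter string are placeholders:
import boto3

# Search every log stream in a log group for a term (placeholder group name).
logs = boto3.client("logs")
paginator = logs.get_paginator("filter_log_events")
for page in paginator.paginate(
        logGroupName="/aws/lambda/my-function",   # placeholder log group
        filterPattern='"error"'):                 # quote the term to match it literally
    for event in page["events"]:
        print(event["logStreamName"], event["message"].rstrip())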
What you need is CloudWatch Logs Insights.
It costs some money to do data scanning this way, though.

Where to find the number of active concurrent invocations in Google Cloud Functions

I am looking for a way to see how many concurrent invocations are active at any point in time, e.g. within a one-minute range. I am looking for this as I received the error:
Forbidden: 403 Exceeded rate limits: too many concurrent queries for
this project_and_region. For more information, see
https://cloud.google.com/bigquery/
The quotas are listed here: https://cloud.google.com/functions/quotas
I am fine with having quotas, but I would like to see this number in a chart. Where can I find this?
Currently there is no way of seeing that information directly. There is a workaround though. You can do as follows:
Go to Google Cloud Console > Stackdriver Logging
At the text box that says "Filter by label or text search", click on the small arrow at the end of the text box.
Choose "Convert to advanced filter"
Type that query inside:
resource.type="cloud_function"
resource.labels.function_name="[GOOGLE_CLOUD_FUNCTION_NAME]"
"Function execution started"
In the "Last hour" drop-down menu, choose "Custom"
Set the start and end time
This will list all the times that the Cloud Function was executed in the time range. If it was executed multiple times, instead of counting one by one you can use the following Python script:
Open Google Cloud Shell
Install the Google Cloud Logging library: $ pip install google-cloud-logging
Create a main.py file using my GitHub code example. (I have tested it and it is working as expected)
Change the date_a_str and set it as start date.
Change the date_b_str and set it as end date.
In function_name = "[CLOUD_FUNCTION_NAME]" change [CLOUD_FUNCTION_NAME] to the name of your Cloud Function.
Execute the Python code $ python main.py
You should see a response as follows:
Found entries: [XX]
Waiting up to 5 seconds.
Sent all pending logs.
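For reference, here is a minimal sketch of what such a counting script could look like with the google-cloud-logging library; the function name and dates are placeholders, and this is an approximation rather than the exact GitHub example mentioned above:
from google.cloud import logging

# Count "Function execution started" entries for one Cloud Function in a time window.
client = logging.Client()

function_name = "[CLOUD_FUNCTION_NAME]"   # placeholder
date_a_str = "2019-01-01T00:00:00Z"       # start of the window
date_b_str = "2019-01-01T01:00:00Z"       # end of the window

log_filter = (
    'resource.type="cloud_function" '
    f'resource.labels.function_name="{function_name}" '
    '"Function execution started" '
    f'timestamp>="{date_a_str}" timestamp<="{date_b_str}"'
)

entries = list(client.list_entries(filter_=log_filter))
print(f"Found entries: {len(entries)}")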

Filter AWS CloudWatch Lambda's logs

I have a Lambda function and its logs in CloudWatch (log group and log stream). Is it possible to filter (in the CloudWatch Management Console) all logs that contain "error"? For example, logs containing "Process exited before completing request".
In Log Groups there is a button labeled "Search Events". You must click on it first.
It then changes to "Filter Streams".
Now you just type your filter and select the beginning date and time.
So this is kind of a side issue, but it was relevant for us. (I posted this to another answer on StackOverflow but thought it would be relevant to this conversation too)
We've noticed that tailing and searching logs gets really slow after a log group has a lot of Log Streams in it, like when an AWS Lambda Function has had a lot of invocations. This is because "tail" type utilities and searching need to connect to each log stream to run. Log Events get expired and deleted due to the policy you set on the Log Group itself, but the Log Streams never get cleaned up. I made a few little utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony over waiting for those logs to get searched.
You can also use CloudWatch Logs Insights (https://aws.amazon.com/about-aws/whats-new/2018/11/announcing-amazon-cloudwatch-logs-insights-fast-interactive-log-analytics/), which is an AWS extension to CloudWatch Logs that gives you a pretty powerful query and analytics tool. However, it can be slow; some of my queries take up to a minute. That's okay if you really need the data.
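As a rough sketch of what running such an Insights query from code could look like (the log group name is a placeholder, and the same query string can be pasted into the console query editor):
import time
import boto3

# Run a CloudWatch Logs Insights query for messages containing "error" over the last hour.
logs = boto3.client("logs")
query = "fields @timestamp, @message | filter @message like /error/ | sort @timestamp desc | limit 100"

started = logs.start_query(
    logGroupName="/aws/lambda/my-function",    # placeholder log group
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=query)

# Poll until the query finishes, then print the matching rows.
while True:
    result = logs.get_query_results(queryId=started["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result["results"]:
    print({field["field"]: field["value"] for field in row})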
You could also use a tool I created called SenseLogs. It downloads CloudWatch data to your browser, where you can do the kind of queries you are asking about. You can either use full-text search for "error" or, if your log data is structured (JSON), use a JavaScript-like expression language to filter by field, e.g.:
error == 'critical'
Posting an update as CloudWatch has changed since 2016:
In the Log Groups view there is a "Search all" button for a full-text search.
Then just type your search.