AWS Lambda error There was an error loading Log Streams - amazon-web-services

When I go to the Logs page, the error below shows:
There was an error loading Log Streams. Please try again by refreshing this page.
The odd thing is that another function, identical except for its code, is creating logs with no problem.
Any suggestions?

I solved it.
I attached the CloudWatchLogsFullAccess policy, and after some time (under an hour) it was working.
I'm not sure why I needed to do this for the second function but not the first, but it's working now.
Below is the link that helped me:
https://blogs.perficient.com/2018/02/12/error-loading-log-streams/
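For reference, attaching that managed policy can also be done outside the console. Here is a minimal sketch with boto3; the role name is a placeholder for your Lambda's execution role:

import boto3

iam = boto3.client("iam")

# Attach the AWS-managed CloudWatchLogsFullAccess policy to the
# Lambda's execution role (role name below is a placeholder).
iam.attach_role_policy(
    RoleName="my-lambda-execution-role",
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchLogsFullAccess",
)

As the answer above notes, the change may take a little while to propagate before the Log Streams page loads.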

Make sure your Lambda has already logged at least once!
This error appears to occur when that is not the case. I've tested fresh Lambdas both with and without log statements to confirm: without any log statements, a corresponding log group for the Lambda does not exist yet; after the first log statement is made, it shows up in a newly created log group.
Although this may seem obvious in hindsight, here is how I ran into the scenario: before any logging had occurred on my new Lambda, I tried to hook it up to CloudWatch Events. I then tried to check whether the Lambda was being invoked (by the events) via the 'Monitoring' tab -> 'View logs in CloudWatch' button, and that is where I encountered this error. The Lambda had not been invoked (the CloudWatch Events hookup had failed), so no logging had occurred, and thus there was no corresponding log group yet to examine when trying to jump to it from the Lambda configuration.
(For what it's worth, I imagine a corresponding log group could be created manually before the first logging, but I have not tested that.)
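For anyone who wants to test that idea: Lambda writes to a log group named /aws/lambda/<function-name>, and such a group can be created up front. A minimal sketch with boto3 (untested for this scenario; the function name is a placeholder):

import boto3

logs = boto3.client("logs")

# Lambda logs to /aws/lambda/<function-name>. Creating the group ahead of
# time is optional; Lambda creates it on first invocation if its role allows.
logs.create_log_group(logGroupName="/aws/lambda/my-function")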

Ensure your Lambda's Execution Role has a Policy that allows writing to CloudWatch Logs from your Lambda.
IAM console -> 'Roles' -> < your Lambda's role > -> 'Permissions' tab -> 'Permissions policies' accordion
Ensure there is a Policy listed that has parameters set like this:
'Service': "CloudWatch Logs"
'Access level': includes at least "Write"
'Resource': your Lambda is not excluded (i.e., it's not set to another specific Lambda, another directory of Lambdas, or another resource type)
'Request condition': does not preclude the context of your given Lambda execution
An example of an "AWS managed policy" that meets these requirements [out-of-the-box, being that it is AWS-managed] is "AWSLambdaBasicExecutionRole". It has these parameters:
'Service': "CloudWatch Logs"
'Access level': "Limited: Write"
'Resource': "All resources"
'Request condition': "None"
If your role does not have such a policy already, either add a new one or edit an existing one to meet the requirements listed here; then this error should be resolved.
For example, in my case before I fixed things, my Lambda's role had a policy based on the AWS-managed "AWSLambdaBasicExecutionRole", but its Resource was somehow limited to a different Lambda, which was my problem: insufficient permission when invoked from my intended Lambda. I fixed this by adding the original AWS-managed "AWSLambdaBasicExecutionRole" policy to my intended Lambda's role (I also deleted the aforementioned policy since nothing else used it, though that probably wasn't strictly necessary; it was just nice to tidy up).
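As a concrete sketch of the requirements above, here is an inline policy equivalent to what AWSLambdaBasicExecutionRole grants, attached with boto3 (role and policy names are placeholders):

import json
import boto3

iam = boto3.client("iam")

# Write access to CloudWatch Logs, not restricted to any particular log group.
# This mirrors what the AWS-managed AWSLambdaBasicExecutionRole grants.
log_write_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "*",
        }
    ],
}

iam.put_role_policy(
    RoleName="my-lambda-execution-role",        # placeholder
    PolicyName="lambda-cloudwatch-logs-write",  # placeholder
    PolicyDocument=json.dumps(log_write_policy),
)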

I resolved it by attaching the CloudWatchFullAccess policy to the execution role of my Lambda function.

Related

Cloudformation Registry not creating log groups when submitting private resource type

I am developing a private resource type for the AWS CloudFormation Registry. I have designed my model schema and developed my handler code, submitted it, and even successfully deployed a stack with my very own private resource type. Yay.
What I need to do now is inspect the logging. As I had generated the scaffolding using the cfn init command, I merely added logging entries to the existing logger object.
e.g.
import logging
from typing import Any, MutableMapping, Optional

from cloudformation_cli_python_lib import (
    Action,
    OperationStatus,
    ProgressEvent,
    Resource,
    SessionProxy,
)

from .models import ResourceHandlerRequest, ResourceModel

# Use this logger to forward log messages to CloudWatch Logs.
LOG = logging.getLogger(__name__)
TYPE_NAME = "Myself::Test::Resourceful"
resource = Resource(TYPE_NAME, ResourceModel)
test_entrypoint = resource.test_entrypoint

@resource.handler(Action.CREATE)
def create_handler(
    session: Optional[SessionProxy],
    request: ResourceHandlerRequest,
    callback_context: MutableMapping[str, Any],
) -> ProgressEvent:
    model = request.desiredResourceState
    progress: ProgressEvent = ProgressEvent(
        status=OperationStatus.IN_PROGRESS,
        resourceModel=model,
    )
    # TODO: put code here
    LOG.info('Creating....')
    return progress
According to the literature,
When you register a resource type using cfn submit, CloudFormation
creates a CloudWatch log group for the resource type in your account.
This enables you to access the logs for your resource to help you
diagnose any faults. The log group is named according to the following
pattern:
/my-resource-type-stack-ResourceHandler-string
Now, when you initiate stack operations for stacks that contain the
resource type, CloudFormation delivers log events emitted by the
resource type to this log group.
When submitting my resource type, however (and even deploying it), I cannot see any log group created in CloudWatch whatsoever. There is clearly something I am missing here.
Please help me understand how to find the logging for my private CloudFormation Registry resource types.
Of course, I will be happy to provide any additional info needed. Thank you!
Got it.
Thanks to @maslick, all I had to do was explicitly set an appropriate logging level.
# Use this logger to forward log messages to CloudWatch Logs.
LOG = logging.getLogger(__name__)
LOG.setLevel(logging.INFO) # <- this line was missing
When deploying the stack, the log group appears. Yay.

Filter AWS Cloudwatch Lambda's Log

I have a Lambda function and its logs in CloudWatch (log group and log stream). Is it possible to filter (in the CloudWatch Management Console) all logs that contain "error"? For example, logs containing "Process exited before completing request".
In Log Groups there is a button "Search Events". You must click on it first.
It then "changes" to "Filter Streams".
Now just type your filter and select the beginning date-time.
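The same search can also be done programmatically. Here is a minimal sketch with boto3 (the log group name is a placeholder; the pattern uses standard CloudWatch Logs filter syntax, with quotes so the phrase is matched as a whole):

import boto3

logs = boto3.client("logs")

# Search every stream in the log group for events containing the phrase.
paginator = logs.get_paginator("filter_log_events")
for page in paginator.paginate(
    logGroupName="/aws/lambda/my-function",
    filterPattern='"Process exited before completing request"',
):
    for event in page["events"]:
        print(event["timestamp"], event["message"])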
So this is kind of a side issue, but it was relevant for us. (I posted this as an answer to another question on Stack Overflow but thought it would be relevant to this conversation too.)
We've noticed that tailing and searching logs gets really slow once a log group has a lot of log streams in it, such as when an AWS Lambda function has had a lot of invocations. This is because "tail"-type utilities and searches need to connect to each log stream to run. Log events expire and get deleted according to the retention policy you set on the log group itself, but the log streams never get cleaned up. I made a few small utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony waiting for those logs to be searched.
You can also use CloudWatch Logs Insights (https://aws.amazon.com/about-aws/whats-new/2018/11/announcing-amazon-cloudwatch-logs-insights-fast-interactive-log-analytics/), an AWS extension to CloudWatch Logs that provides a pretty powerful query and analytics tool. However, it can be slow; some of my queries take up to a minute. That's acceptable if you really need the data.
You could also use a tool I created called SenseLogs. It downloads CloudWatch data to your browser, where you can run the kind of queries you ask about. You can either do a full-text search for "error" or, if your log data is structured (JSON), use a JavaScript-like expression language to filter by field, e.g.:
error == 'critical'
Posting an update, as CloudWatch has changed since 2016:
In Log Groups there is a "Search all" button for a full-text search.
Then just type your search.

AWS Lambda: error creating the event source mapping: Configuration is ambiguously defined

There was an error creating the event source mapping: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.
I created an event earlier from the GUI console, 6-7 days ago, and it was working fine. The next day the event just went missing; I can't see it anymore in the Lambda console GUI, yet every S3 object still seems to trigger the Lambda function without a problem. If I can't see it, that's no good, so I deleted the Lambda function, waited 5-10 seconds, and created another new function. Now I receive the error above when I try to create the event sources like this:
When I click "Submit", the event sources tab says "You do not have any event sources for this function" and Lambda does not get triggered; the entire application flow is now broken :(
The problem is almost the same as in https://forums.aws.amazon.com/thread.jspa?messageID=670712&#670712 but somehow I can't reply to that thread, so I created a new thread here instead. Has anyone encountered this issue?
In fact, I tried to respond to the existing AWS forum thread: https://forums.aws.amazon.com/thread.jspa?messageID=670712&#670712
but I keep getting this funny error: "Your message quota has been reached. Please try again later." And I wasn't even posting anything, so how can I have used up my quota?
What I suspect is that your S3 bucket may still be "linked" to the Lambda function.
Maybe check your S3 bucket for events and remove them there, then try creating the Lambda events again?
i.e. S3 bucket -> Properties -> Events
After 6 years, it's nice to see people still benefiting from this answer.
Here is a shameless plug for a YouTube video I uploaded on 2022-12-13:
https://www.youtube.com/watch?v=rjpOU7jbgEs
The issue must be that the S3 bucket is already linked with the suffix/prefix you are trying to link. Remove the link in S3 and try again.
When you set up a Lambda function with an S3 trigger, the notification gets recorded in the Properties section of that S3 bucket.
The mentioned error occurs when the earlier Lambda function is deleted and you try to set up the same kind of trigger again. The thing to note is that the S3 notification is not deleted when you delete the Lambda function.
Go to S3 bucket > Properties > Event notifications,
delete the old setting, and then set up the trigger on the new Lambda function.
Here is a link to a youtube video profiling this issue and demonstrating the solution:
https://www.youtube.com/watch?v=1Tfmc9nEtbU
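If you prefer to do that cleanup outside the console, here is a hedged sketch with boto3 (bucket name is a placeholder): inspect the bucket's current notification configuration, then overwrite it without the stale Lambda entry.

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

# Stale Lambda triggers from a deleted function still show up here.
config = s3.get_bucket_notification_configuration(Bucket=bucket)
print(config.get("LambdaFunctionConfigurations", []))

# Overwrite the configuration. An empty dict removes all notifications;
# keep any topic/queue/Lambda configurations you still need.
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={},
)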
Just as Ridwaan Manuel said, you must remove the events by going to S3 bucket -> Properties -> Events, as the video shows.
Steps to reproduce this issue:
Create a bucket and create a folder called “example/”
Create a Lambda function
Add an S3 trigger to the Lambda using the bucket from step 1 with default settings
Save the trigger
Click Save and notice the error
Refresh the page and notice that the trigger disappeared
Add the same bucket again and notice the ambiguous reference error

Is it possible for S3 notifications to SQS to fail?

I have set up an S3 bucket to publish messages for each PUT and POST action. Files get uploaded to that bucket using the CLI. It does work fine, but out of 4 files pushed sequentially, only one triggers a message. I am not sure this has always happened, but it is happening consistently now. Note that it does not happen when I upload files manually (i.e., I always get a message per file).
I have made sure that there is no downstream system consuming the messages (as confirmation, I still see the original message triggered by the first file).
Is there any reason to believe that this AWS feature is not reliable? Since this is unlikely, what could be the problem here?
As suggested by Michael in the comments, the problem was that the bucket only listened for s3:ObjectCreated:Put. What was happening is that all files but the first were uploaded using multipart upload, which did not trigger any message creation.
I modified the bucket to trigger messages on s3:ObjectCreated:* and it now works as expected.
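For reference, setting that broader event type programmatically looks roughly like this with boto3 (bucket name and queue ARN are placeholders):

import boto3

s3 = boto3.client("s3")

# s3:ObjectCreated:* covers Put, Post, Copy and CompleteMultipartUpload,
# so multipart uploads from the CLI also produce a message.
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:my-queue",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)

Note that the SQS queue policy must already allow S3 to send messages, otherwise the call is rejected.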
Inspired by RaySF's answer, I fixed the issue directly in the AWS console.
Sign in AWS console
S3
Find your bucket and click on it
Properties tab
Events
Edit the related event
Change from PUT to All object create events

Unable to select a topic for Amazon Transcoder

I want my Amazon Elastic Transcoder pipeline to post a notification to an SNS topic when certain events happen, but unfortunately I'm getting an error message when I try to select the existing topic from the "Edit Pipeline" page: "Role ARN is invalid: does not start with arn"
I'm not sure what I'm doing wrong; it seems pretty straightforward. Here are the steps I followed:
Selected my Pipeline
Click "Edit"
Went to "Notifications" sections and created a new topic right there (here I would expect it to have auto selected after the creation but apparently it just creates and nothing else happens..)
so.. I selected "Use an existing SNS topic" and selected the recently saved topic
Hit "Save" button
Got the error
What am I doing wrong?
Thanks in advance
This seems to be a rather weird UI bug in the AWS console.
When you edit your pipeline to set the SNS topic, the select box for the pipeline's IAM role is incorrectly set to -- Console Default Role -- (it should be Elastic_Transcoder_Default_Role; the entries beginning with -- probably shouldn't be selectable at all).
Now, when saving the form, this causes the error.
It is not related to the selection of the topic, as it might seem.
Changing the role back to Elastic_Transcoder_Default_Role makes the error go away.
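If the console keeps misbehaving, the notifications can also be set through the API, which avoids the role select box entirely. A hedged sketch with boto3 (pipeline ID and topic ARN are placeholders):

import boto3

transcoder = boto3.client("elastictranscoder")

# Point the pipeline's notifications at an existing SNS topic without
# touching the role, sidestepping the console's role select box.
transcoder.update_pipeline_notifications(
    Id="1111111111111-abcde1",  # placeholder pipeline ID
    Notifications={
        "Progressing": "",
        "Completed": "arn:aws:sns:us-east-1:123456789012:my-topic",
        "Warning": "",
        "Error": "arn:aws:sns:us-east-1:123456789012:my-topic",
    },
)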