I received this error in the terminal: [screenshot: error in terminal]
[screenshot: YAML template]
I tried researching the issue, but I'm honestly not sure where to start diagnosing it in a logical order.
TTL settings belong in the CacheBehavior (or DefaultCacheBehavior) section of an AWS::CloudFront::Distribution, not directly under the distribution's properties.
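A minimal sketch of where the TTLs belong, assuming a YAML template and a hypothetical origin ID my-origin (everything here besides the TTL placement is illustrative):

Resources:
  MyDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: my-origin
            DomainName: example-bucket.s3.amazonaws.com
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: my-origin
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
          MinTTL: 0            # TTLs go inside the (Default)CacheBehavior,
          DefaultTTL: 86400    # not at the distribution level
          MaxTTL: 31536000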
As the title says, I am attempting to install a CloudWatch agent (CW agent) on my on-premises server (OPS).
After running this command, which I got from the AWS User Guide, to start the CW agent:
& $Env:ProgramFiles\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1 -m ec2 -a start
I got this error:
****** processing cwagent-otel-collector ******
cwagent-otel-collector will not be started as it has not been configured yet.
****** processing amazon-cloudwatch-agent ******
AmazonCloudWatchAgent has been started
I did not know what this meant, so I searched and found that someone else who had this issue had not created a config file.
I did create a config file (named config.json by default) using the configuration wizard, and I am still having the issue.
I have tried looking through a number of pages in that user guide, but nothing has resolved the issue.
Thank you in advance for any assistance you can provide.
This message is informational and not an error.
The CloudWatch agent is bundled with the AWS OpenTelemetry (OTel) collector agent; they are actually two agents, and the CloudWatch agent and the OTel collector have separate configuration files. If you provide a config for one and not the other, only the configured one is started. This is expected behavior.
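If you do want the OTel collector running as well, it needs its own config loaded. A sketch, assuming the default Windows install path and an agent version that bundles the collector (the -o flag selects the OTel collector configuration; here the built-in default):

& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -o default -s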
Thank you for taking the time to answer. I have since resolved the issue.
Everything from the command I was using to the path where the file resided was incorrect.
Starting over and going through all the steps again, this time with background information, helped.
The first installation, combined with learning everything for the first time, produced the issue.
For anyone having this issue: when you hit a wall like this, I recommend starting over. I know it is not what anyone wants to do, but in the end it saved time.
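For reference, the documented way to load a wizard-generated config on an on-premises Windows server is fetch-config in onPremise mode (not -m ec2 -a start). A sketch, assuming the default install path and that config.json sits in the agent directory:

& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m onPremise -s -c file:"C:\Program Files\Amazon\AmazonCloudWatchAgent\config.json"

The -s flag starts the agent once the config is fetched, so a separate -a start is not needed.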
I had a working configuration with fluent-bit on EKS and Elasticsearch on AWS, pointing at the AWS Elasticsearch service, but for cost-saving purposes we deleted that cluster and created a single instance running a standalone Elasticsearch, which is enough for dev purposes. (The AWS service doesn't cope well with only one instance.)
The issue is that during this migration fluent-bit seems to have broken, and I get lots of "[warn] failed to flush chunk" and some "[error] [upstream] connection #55 to ES-SERVER:9200 timed out after 10 seconds".
My current configuration:
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 50MB
Skip_Long_Lines On
Refresh_Interval 10
Ignore_Older 1m
I think the issue is in one of these configuration sections: if I comment out the kubernetes filter I no longer get the errors, but then I lose the Kubernetes fields in the indices...
I tried tweaking some fluent-bit parameters to no avail. Does anyone have a suggestion?
So, the previous logs did not indicate anything, but I finally found something after activating Trace_Error in the elasticsearch output:
{"index":{"_index":"fluent-bit-2021.04.16","_type":"_doc","_id":"Xkxy 23gBidvuDr8mzw8W","status":400,"error":{"type":"mapper_parsing_exception","reas on":"object mapping for [kubernetes.labels.app] tried to parse field [app] as o bject, but found a concrete value"}}
Did someone get that error before and know how to solve it?
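For context, the usual cause of this with Kubernetes labels: Elasticsearch interprets dots in field names as object paths, so documents whose labels differ in shape collide in the same index. A hypothetical pair of records that would trigger exactly this message:

{"kubernetes": {"labels": {"app": "myapp"}}}
{"kubernetes": {"labels": {"app.kubernetes.io/name": "myapp"}}}

The second record makes Elasticsearch map app as an object (app -> kubernetes -> io/name), while the first sends app as a concrete string value.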
So, after looking into the logs and finding the mapping issue, I seem to have resolved it. The logs are now correctly parsed and sent to Elasticsearch.
To resolve it I had to increase the output retry limit and add the Replace_Dots option.
[OUTPUT]
Name es
Match *
Host ELASTICSERVER
Port 9200
Index <fluent-bit-{now/d}>
Retry_Limit 20
Replace_Dots On
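As I understand it, Replace_Dots makes the es output rewrite dots inside key names to underscores before indexing, so a dotted label key no longer expands into a nested object that conflicts with a plain label of the same name, e.g.:

app.kubernetes.io/name  ->  indexed as  app_kubernetes_io/name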
It seems that at the beginning I had issues with the content being sent; because of that, the error appeared to continue after the change, until a new index was created. This made me think the error was still unresolved.
This is a really strange one, as it started throwing errors overnight. It had been working fine up until yesterday, but this morning it's been playing up all day.
I'm using illuminate/filesystem in my project and for the endpoint I was using:
https://s3.eu-west-2.amazonaws.com
This morning we started getting errors saying:
Error executing "ListObjects" on "bucket-01.https://s3.eu-west-2.amazonaws.com"; AWS HTTP error: cURL error 1: Protocol "bucket-01.https" not supported or disabled in libcurl (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
File: .../vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php
Line: 195
Seeing that it prepends the bucket name to the protocol of the endpoint, I decided to remove the protocol from the endpoint, making it:
s3.eu-west-2.amazonaws.com
Now I'm getting an error saying:
Error executing "ListObjects" on "//bucket-01.s3.eu-west-2.amazonaws.com/bucket-01.s3.eu-west-2.amazonaws.com";
AWS HTTP error: Client error: GET http://bucket-01.s3.eu-west-2.amazonaws.com/bucket-01.s3.eu-west-2.amazonaws.com resulted in a 404 Not Found response
NoSuchKey The specified key does not exist.
As you can see, it now appends the endpoint again after the initial endpoint.
Does anyone know what might have happened?
After hours of searching for a solution, I came across this issue in the laravel/framework repository: https://github.com/laravel/framework/issues/36694
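For anyone else hitting this, the disk options involved live in config/filesystems.php. A sketch with illustrative values (bucket-01 standing in for the real bucket; use_path_style_endpoint is one of the workarounds discussed around custom endpoints):

's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION', 'eu-west-2'),
    'bucket' => env('AWS_BUCKET', 'bucket-01'),
    // For plain AWS S3, leaving AWS_ENDPOINT unset lets the SDK derive the
    // regional endpoint itself instead of prepending the bucket to whatever
    // string is configured here.
    'endpoint' => env('AWS_ENDPOINT'),
    // Path-style addressing avoids the bucket-name prefixing entirely.
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
],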
I have a Lambda function with the following permissions: [screenshot of the function's permissions]
Whenever the Lambda function is triggered, I see a log file being added to the log stream in CloudWatch.
When I try to open any of the logs, it throws a "Failed to load events. Unexpected error loading events" error. Please help me fix this issue.
I had the same error. The issue was that my 'Authorization' header had been modified. In my case, I had added it using a Chrome extension for testing purposes.
For what it's worth, I saw the same error using Firefox but got the expected results using Chrome. So use Chrome to view CloudWatch.
This just happened to me. I suspect it had something to do with AWS's recent DNS issues. I had to go into my router's settings and change from dynamic DNS to static DNS. I used servers 1.0.0.1 and 1.1.1.1. Hope this helps someone!
After running an AWS Elastic Beanstalk application for a few weeks, I suddenly can't open my application. The page simply displays an error which doesn't provide much information about how to fix it.
Error
A problem occurred while loading your page: AWS Query failed to deserialize response
(there is no more information, and Googling hasn't found any answer either)
So before upgrading my plan and starting to pay Amazon a not-insignificant amount of money to be able to contact their technical support, I thought I would ask here first in case someone has encountered this issue.
Thanks for any suggestions.
After receiving this generic error, I was able to dig into the actual error message by using the EB CLI. In my case the CLI threw "ZIP does not support timestamps before 1980".
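In case it helps, the EB CLI commands in question are the stock ones, run from the application directory:

eb status    # current environment health
eb events    # recent environment events, where the underlying error often appears
eb logs      # pull instance logs
eb deploy    # packaging problems (like the pre-1980 ZIP timestamp error) typically surface here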